2026-03-09T16:59:12.119 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T16:59:12.122 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T16:59:12.141 INFO:teuthology.run:Config: archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/573
branch: squid
description: orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_cephadm_timeout}
email: null
first_in_suite: false
flavor: default
job_id: '573'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_REFRESH_FAILED
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
  - client.0
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPglkSCWx4QFrrsHA+0Raog+ziEFyIEyviCPCUdKLEBwRCQJ4xppWGJ0hFc2iYnKYnlDfWuFLrbEE2wZSYBgCWY=
tasks:
- install: null
- cephadm: null
- workunit:
    clients:
      client.0:
      - cephadm/test_cephadm_timeout.py
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T16:59:12.141 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T16:59:12.141 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T16:59:12.141 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T16:59:12.141 INFO:teuthology.task.internal:Checking packages...
2026-03-09T16:59:12.142 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T16:59:12.142 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T16:59:12.142 INFO:teuthology.packaging:ref: None
2026-03-09T16:59:12.142 INFO:teuthology.packaging:tag: None
2026-03-09T16:59:12.142 INFO:teuthology.packaging:branch: squid
2026-03-09T16:59:12.142 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T16:59:12.142 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T16:59:12.747 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T16:59:12.748 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T16:59:12.749 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T16:59:12.749 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T16:59:12.750 INFO:teuthology.task.internal:Saving configuration
2026-03-09T16:59:12.754 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T16:59:12.755 INFO:teuthology.task.internal.check_lock:Checking locks...
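[Editor's note] The DEBUG line above shows teuthology resolving the build through Shaman's search API. As an illustrative sketch only (not teuthology's actual code, which lives in `teuthology.packaging`; the helper name is hypothetical), the logged query URL can be assembled like this:

```python
from urllib.parse import urlencode

def shaman_search_url(project, flavor, distro, arch, ref,
                      base="https://shaman.ceph.com/api/search"):
    """Build a Shaman build-search URL for ready builds of one project/flavor/distro/ref."""
    params = {
        "status": "ready",          # only builds marked ready
        "project": project,
        "flavor": flavor,
        "distros": f"{distro}/{arch}",  # urlencode percent-escapes the slashes
        "ref": ref,
    }
    return f"{base}?{urlencode(params)}"

url = shaman_search_url("ceph", "default", "ubuntu/22.04", "x86_64", "squid")
```

With these inputs the helper reproduces the exact URL queried in the log line above.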
2026-03-09T16:59:12.761 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/573', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 16:58:29.384574', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPglkSCWx4QFrrsHA+0Raog+ziEFyIEyviCPCUdKLEBwRCQJ4xppWGJ0hFc2iYnKYnlDfWuFLrbEE2wZSYBgCWY='}
2026-03-09T16:59:12.761 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T16:59:12.762 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0', 'client.0']
2026-03-09T16:59:12.762 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T16:59:12.768 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding
2026-03-09T16:59:12.768 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f8fca997e20>, signals=[15])
2026-03-09T16:59:12.768 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T16:59:12.769 INFO:teuthology.task.internal:Opening connections...
2026-03-09T16:59:12.769 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-09T16:59:12.769 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T16:59:12.827 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T16:59:12.828 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-09T16:59:12.937 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-09T16:59:12.937 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:NAME="Ubuntu"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="22.04"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_CODENAME=jammy
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:ID=ubuntu
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE=debian
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T16:59:12.983 INFO:teuthology.orchestra.run.vm01.stdout:UBUNTU_CODENAME=jammy
2026-03-09T16:59:12.983 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-09T16:59:12.988 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T16:59:12.990 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T16:59:12.991 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T16:59:12.991 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-09T16:59:13.027 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T16:59:13.028 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T16:59:13.028 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-09T16:59:13.071 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T16:59:13.072 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T16:59:13.081 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-09T16:59:13.114 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T16:59:13.420 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T16:59:13.421 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T16:59:13.421 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T16:59:13.424 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T16:59:13.425 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T16:59:13.426 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T16:59:13.427 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T16:59:13.472 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T16:59:13.473 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T16:59:13.473 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T16:59:13.514 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T16:59:13.515 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T16:59:13.564 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T16:59:13.568 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T16:59:13.569 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T16:59:13.571 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T16:59:13.571 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T16:59:13.619 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T16:59:13.621 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
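[Editor's note] The coredump task above rewrites `kernel.core_pattern` so cores land in the archive as `<epoch>.<pid>.core`: per core(5), `%t` expands to the dump time as a Unix epoch and `%p` to the crashing process's PID. A small hypothetical helper (not part of teuthology) to decode such filenames when triaging an archive directory:

```python
from datetime import datetime, timezone

def parse_core_name(name):
    """Split an '<epoch>.<pid>.core' filename into a (UTC datetime, pid) pair."""
    epoch, pid, suffix = name.split(".")
    if suffix != "core":
        raise ValueError(f"not a core file name: {name!r}")
    return datetime.fromtimestamp(int(epoch), tz=timezone.utc), int(pid)

# Example with a made-up epoch near this run's date:
ts, pid = parse_core_name("1772902752.12345.core")
```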
2026-03-09T16:59:13.621 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T16:59:13.663 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T16:59:13.707 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T16:59:13.750 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-09T16:59:13.751 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T16:59:13.799 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-09T16:59:13.856 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T16:59:13.857 INFO:teuthology.task.internal:Starting timer...
2026-03-09T16:59:13.857 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T16:59:13.860 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T16:59:13.862 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-09T16:59:13.862 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T16:59:13.862 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T16:59:13.862 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T16:59:13.862 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T16:59:13.864 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T16:59:13.864 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T16:59:13.870 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T16:59:13.870 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorymc51j5a5 --limit vm01.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T17:01:09.524 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local')]
2026-03-09T17:01:09.524 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-09T17:01:09.525 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T17:01:09.586 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-09T17:01:09.816 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-09T17:01:09.817 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T17:01:09.820 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T17:01:09.820 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T17:01:09.820 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T17:01:09.873 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Command line: ntpd -gq
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: ----------------------------------------------------
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: corporation. Support and training for ntp-4 are
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: available at https://www.nwtime.org/support
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: ----------------------------------------------------
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: proto: precision = 0.029 usec (-25)
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: basedate set to 2022-02-04
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: gps base set to 2022-02-06 (week 2196)
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T17:01:09.874 INFO:teuthology.orchestra.run.vm01.stderr: 9 Mar 17:01:09 ntpd[16071]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T17:01:09.876 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T17:01:09.876 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T17:01:09.876 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T17:01:09.876 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Listen normally on 3 ens3 192.168.123.101:123
2026-03-09T17:01:09.876 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Listen normally on 4 lo [::1]:123
2026-03-09T17:01:09.876 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:1%2]:123
2026-03-09T17:01:09.876 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:09 ntpd[16071]: Listening on routing socket on fd #22 for interface updates
2026-03-09T17:01:10.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:10 ntpd[16071]: Soliciting pool server 129.250.35.250
2026-03-09T17:01:11.873 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:11 ntpd[16071]: Soliciting pool server 85.10.240.253
2026-03-09T17:01:11.873 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:11 ntpd[16071]: Soliciting pool server 139.162.156.95
2026-03-09T17:01:12.873 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:12 ntpd[16071]: Soliciting pool server 31.209.85.243
2026-03-09T17:01:12.873 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:12 ntpd[16071]: Soliciting pool server 49.12.199.148
2026-03-09T17:01:12.874 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:12 ntpd[16071]: Soliciting pool server 148.251.54.81
2026-03-09T17:01:13.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:13 ntpd[16071]: Soliciting pool server 144.76.76.107
2026-03-09T17:01:13.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:13 ntpd[16071]: Soliciting pool server 168.119.211.223
2026-03-09T17:01:13.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:13 ntpd[16071]: Soliciting pool server 185.232.69.65
2026-03-09T17:01:13.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:13 ntpd[16071]: Soliciting pool server 82.165.178.31
2026-03-09T17:01:14.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:14 ntpd[16071]: Soliciting pool server 131.188.3.223
2026-03-09T17:01:14.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:14 ntpd[16071]: Soliciting pool server 85.215.189.120
2026-03-09T17:01:14.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:14 ntpd[16071]: Soliciting pool server 144.76.43.40
2026-03-09T17:01:14.872 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:14 ntpd[16071]: Soliciting pool server 91.189.91.157
2026-03-09T17:01:15.871 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:15 ntpd[16071]: Soliciting pool server 185.125.190.58
2026-03-09T17:01:15.871 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:15 ntpd[16071]: Soliciting pool server 185.13.148.71
2026-03-09T17:01:15.871 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:15 ntpd[16071]: Soliciting pool server 188.174.253.188
2026-03-09T17:01:19.889 INFO:teuthology.orchestra.run.vm01.stdout: 9 Mar 17:01:19 ntpd[16071]: ntpd: time slew +0.012230 s
2026-03-09T17:01:19.890 INFO:teuthology.orchestra.run.vm01.stdout:ntpd: time slew +0.012230s
2026-03-09T17:01:19.909 INFO:teuthology.orchestra.run.vm01.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T17:01:19.909 INFO:teuthology.orchestra.run.vm01.stdout:==============================================================================
2026-03-09T17:01:19.909 INFO:teuthology.orchestra.run.vm01.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:01:19.909 INFO:teuthology.orchestra.run.vm01.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:01:19.909 INFO:teuthology.orchestra.run.vm01.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:01:19.909 INFO:teuthology.orchestra.run.vm01.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:01:19.909 INFO:teuthology.orchestra.run.vm01.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:01:19.909 INFO:teuthology.run_tasks:Running task install...
2026-03-09T17:01:19.911 DEBUG:teuthology.task.install:project ceph
2026-03-09T17:01:19.911 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T17:01:19.911 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T17:01:19.911 INFO:teuthology.task.install:Using flavor: default
2026-03-09T17:01:19.913 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T17:01:19.913 INFO:teuthology.task.install:extra packages: []
2026-03-09T17:01:19.913 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-key list | grep Ceph
2026-03-09T17:01:19.989 INFO:teuthology.orchestra.run.vm01.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T17:01:20.008 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T17:01:20.008 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T17:01:20.008 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T17:01:20.008 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T17:01:20.008 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:01:20.684 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-09T17:01:20.684 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T17:01:21.228 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-09T17:01:21.228 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-09T17:01:21.235 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-get update
2026-03-09T17:01:21.526 INFO:teuthology.orchestra.run.vm01.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T17:01:21.788 INFO:teuthology.orchestra.run.vm01.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T17:01:21.878 INFO:teuthology.orchestra.run.vm01.stdout:Ign:3 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-09T17:01:21.890 INFO:teuthology.orchestra.run.vm01.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T17:01:21.986 INFO:teuthology.orchestra.run.vm01.stdout:Get:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-09T17:01:21.992 INFO:teuthology.orchestra.run.vm01.stdout:Hit:6 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T17:01:22.094 INFO:teuthology.orchestra.run.vm01.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-09T17:01:22.202 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-09T17:01:22.274 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 25.8 kB in 1s (29.0 kB/s)
2026-03-09T17:01:22.935 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:01:22.946 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-09T17:01:22.978 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:01:23.138 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:01:23.139 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout:The following additional packages will be installed:
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T17:01:23.251 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout:Suggested packages:
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: smart-notifier mailx | mailutils
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout:Recommended packages:
2026-03-09T17:01:23.252 INFO:teuthology.orchestra.run.vm01.stdout: btrfs-tools
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed:
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T17:01:23.287 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: socat unzip xmlstarlet zip
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be upgraded:
2026-03-09T17:01:23.288 INFO:teuthology.orchestra.run.vm01.stdout: librados2 librbd1
2026-03-09T17:01:23.759 INFO:teuthology.orchestra.run.vm01.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T17:01:23.759 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 178 MB of archives.
2026-03-09T17:01:23.759 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-09T17:01:23.759 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-09T17:01:23.869 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-09T17:01:24.239 INFO:teuthology.orchestra.run.vm01.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-09T17:01:24.254 INFO:teuthology.orchestra.run.vm01.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-09T17:01:24.353 INFO:teuthology.orchestra.run.vm01.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-09T17:01:24.638 INFO:teuthology.orchestra.run.vm01.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-09T17:01:24.656 INFO:teuthology.orchestra.run.vm01.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-09T17:01:24.669 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB]
2026-03-09T17:01:24.693 INFO:teuthology.orchestra.run.vm01.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-09T17:01:24.704 INFO:teuthology.orchestra.run.vm01.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-09T17:01:24.707 INFO:teuthology.orchestra.run.vm01.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-09T17:01:24.708 INFO:teuthology.orchestra.run.vm01.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-09T17:01:24.710 INFO:teuthology.orchestra.run.vm01.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-09T17:01:24.733 INFO:teuthology.orchestra.run.vm01.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-09T17:01:24.738 INFO:teuthology.orchestra.run.vm01.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-09T17:01:24.743 INFO:teuthology.orchestra.run.vm01.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-09T17:01:24.786 INFO:teuthology.orchestra.run.vm01.stdout:Get:17 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-09T17:01:24.799 INFO:teuthology.orchestra.run.vm01.stdout:Get:18 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-09T17:01:24.802 INFO:teuthology.orchestra.run.vm01.stdout:Get:19 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-09T17:01:24.802 INFO:teuthology.orchestra.run.vm01.stdout:Get:20 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-09T17:01:24.805 INFO:teuthology.orchestra.run.vm01.stdout:Get:21 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-09T17:01:24.806 INFO:teuthology.orchestra.run.vm01.stdout:Get:22 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-09T17:01:24.812 INFO:teuthology.orchestra.run.vm01.stdout:Get:23 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-09T17:01:24.841 INFO:teuthology.orchestra.run.vm01.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-09T17:01:24.841 INFO:teuthology.orchestra.run.vm01.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-09T17:01:24.844 INFO:teuthology.orchestra.run.vm01.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-09T17:01:24.847 INFO:teuthology.orchestra.run.vm01.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-09T17:01:24.849 INFO:teuthology.orchestra.run.vm01.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-09T17:01:24.849 INFO:teuthology.orchestra.run.vm01.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-09T17:01:24.850 INFO:teuthology.orchestra.run.vm01.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-09T17:01:24.851 INFO:teuthology.orchestra.run.vm01.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-09T17:01:24.949 INFO:teuthology.orchestra.run.vm01.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-09T17:01:24.950 INFO:teuthology.orchestra.run.vm01.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-09T17:01:24.950 INFO:teuthology.orchestra.run.vm01.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-09T17:01:24.950 INFO:teuthology.orchestra.run.vm01.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-09T17:01:25.051 INFO:teuthology.orchestra.run.vm01.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-09T17:01:25.052 INFO:teuthology.orchestra.run.vm01.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-09T17:01:25.054 INFO:teuthology.orchestra.run.vm01.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-09T17:01:25.055 INFO:teuthology.orchestra.run.vm01.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-09T17:01:25.055 INFO:teuthology.orchestra.run.vm01.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-09T17:01:25.056 INFO:teuthology.orchestra.run.vm01.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-09T17:01:25.125 INFO:teuthology.orchestra.run.vm01.stdout:Get:42 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB]
2026-03-09T17:01:25.126 INFO:teuthology.orchestra.run.vm01.stdout:Get:43 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB]
2026-03-09T17:01:25.129 INFO:teuthology.orchestra.run.vm01.stdout:Get:44 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB]
2026-03-09T17:01:25.155 INFO:teuthology.orchestra.run.vm01.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-09T17:01:25.155 INFO:teuthology.orchestra.run.vm01.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-09T17:01:25.156 INFO:teuthology.orchestra.run.vm01.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-09T17:01:25.157 INFO:teuthology.orchestra.run.vm01.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-09T17:01:25.257 INFO:teuthology.orchestra.run.vm01.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-09T17:01:25.264 INFO:teuthology.orchestra.run.vm01.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-09T17:01:25.264 INFO:teuthology.orchestra.run.vm01.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-09T17:01:25.264 INFO:teuthology.orchestra.run.vm01.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-09T17:01:25.265 INFO:teuthology.orchestra.run.vm01.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-09T17:01:25.269 INFO:teuthology.orchestra.run.vm01.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-09T17:01:25.363 INFO:teuthology.orchestra.run.vm01.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-09T17:01:25.365 INFO:teuthology.orchestra.run.vm01.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-09T17:01:25.368 INFO:teuthology.orchestra.run.vm01.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-09T17:01:25.368 INFO:teuthology.orchestra.run.vm01.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-09T17:01:25.465 INFO:teuthology.orchestra.run.vm01.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-09T17:01:25.601 INFO:teuthology.orchestra.run.vm01.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-09T17:01:25.602 INFO:teuthology.orchestra.run.vm01.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-09T17:01:25.602 INFO:teuthology.orchestra.run.vm01.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-09T17:01:25.619 INFO:teuthology.orchestra.run.vm01.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-09T17:01:25.619 INFO:teuthology.orchestra.run.vm01.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-09T17:01:25.620 INFO:teuthology.orchestra.run.vm01.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-09T17:01:25.620 INFO:teuthology.orchestra.run.vm01.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-09T17:01:25.621 INFO:teuthology.orchestra.run.vm01.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-09T17:01:25.621 INFO:teuthology.orchestra.run.vm01.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-09T17:01:25.715 INFO:teuthology.orchestra.run.vm01.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-09T17:01:25.724 INFO:teuthology.orchestra.run.vm01.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-09T17:01:25.726 INFO:teuthology.orchestra.run.vm01.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-09T17:01:25.726 INFO:teuthology.orchestra.run.vm01.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-09T17:01:25.731 INFO:teuthology.orchestra.run.vm01.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-09T17:01:25.734 INFO:teuthology.orchestra.run.vm01.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-09T17:01:25.735 INFO:teuthology.orchestra.run.vm01.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-09T17:01:25.735 INFO:teuthology.orchestra.run.vm01.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-09T17:01:25.817 INFO:teuthology.orchestra.run.vm01.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-09T17:01:25.839 INFO:teuthology.orchestra.run.vm01.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-09T17:01:25.842 INFO:teuthology.orchestra.run.vm01.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-09T17:01:25.843 INFO:teuthology.orchestra.run.vm01.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-09T17:01:25.843 INFO:teuthology.orchestra.run.vm01.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-09T17:01:25.843 INFO:teuthology.orchestra.run.vm01.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-09T17:01:25.845 INFO:teuthology.orchestra.run.vm01.stdout:Get:83 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-09T17:01:25.846 INFO:teuthology.orchestra.run.vm01.stdout:Get:84 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-09T17:01:25.921 INFO:teuthology.orchestra.run.vm01.stdout:Get:85 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-09T17:01:25.921 INFO:teuthology.orchestra.run.vm01.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-09T17:01:26.024 INFO:teuthology.orchestra.run.vm01.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-09T17:01:26.027 INFO:teuthology.orchestra.run.vm01.stdout:Get:88 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-09T17:01:26.027 INFO:teuthology.orchestra.run.vm01.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-09T17:01:26.160 INFO:teuthology.orchestra.run.vm01.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-09T17:01:26.424 INFO:teuthology.orchestra.run.vm01.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-09T17:01:26.668 INFO:teuthology.orchestra.run.vm01.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-09T17:01:26.755 INFO:teuthology.orchestra.run.vm01.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-09T17:01:26.756 INFO:teuthology.orchestra.run.vm01.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-09T17:01:26.768 INFO:teuthology.orchestra.run.vm01.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-09T17:01:27.097 INFO:teuthology.orchestra.run.vm01.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-09T17:01:28.129 INFO:teuthology.orchestra.run.vm01.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-09T17:01:28.129 INFO:teuthology.orchestra.run.vm01.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-09T17:01:28.224 INFO:teuthology.orchestra.run.vm01.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-09T17:01:28.337 INFO:teuthology.orchestra.run.vm01.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-09T17:01:28.347 INFO:teuthology.orchestra.run.vm01.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-09T17:01:28.348 INFO:teuthology.orchestra.run.vm01.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-09T17:01:28.460 INFO:teuthology.orchestra.run.vm01.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-09T17:01:28.892 INFO:teuthology.orchestra.run.vm01.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-09T17:01:28.892 INFO:teuthology.orchestra.run.vm01.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-09T17:01:31.142 INFO:teuthology.orchestra.run.vm01.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-09T17:01:31.142 INFO:teuthology.orchestra.run.vm01.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-09T17:01:31.142 INFO:teuthology.orchestra.run.vm01.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-09T17:01:31.710 INFO:teuthology.orchestra.run.vm01.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-09T17:01:32.008 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 178 MB in 8s (21.1 MB/s)
2026-03-09T17:01:32.214 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-09T17:01:32.239 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-09T17:01:32.240 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-09T17:01:32.242 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T17:01:32.263 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-09T17:01:32.267 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-09T17:01:32.268 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T17:01:32.284 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-09T17:01:32.288 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-09T17:01:32.299 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T17:01:32.321 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-09T17:01:32.324 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T17:01:32.328 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:01:32.365 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-09T17:01:32.370 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T17:01:32.371 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:01:32.389 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-09T17:01:32.394 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T17:01:32.395 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:01:32.424 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-09T17:01:32.429 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-09T17:01:32.430 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T17:01:32.455 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:32.456 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T17:01:32.531 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:32.533 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T17:01:32.600 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libnbd0.
2026-03-09T17:01:32.605 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-09T17:01:32.606 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-09T17:01:32.622 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs2.
2026-03-09T17:01:32.627 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:32.628 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:32.656 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rados.
2026-03-09T17:01:32.661 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:32.662 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:32.681 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-09T17:01:32.686 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T17:01:32.686 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:32.700 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cephfs.
2026-03-09T17:01:32.704 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:32.705 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:32.722 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-09T17:01:32.728 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T17:01:32.729 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:32.749 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-09T17:01:32.754 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-09T17:01:32.755 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T17:01:32.772 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-prettytable.
2026-03-09T17:01:32.777 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-09T17:01:32.777 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-09T17:01:32.793 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rbd.
2026-03-09T17:01:32.799 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:32.800 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:32.819 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-09T17:01:32.824 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-09T17:01:32.825 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T17:01:32.845 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-09T17:01:32.850 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-09T17:01:32.851 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T17:01:32.867 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-09T17:01:32.872 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-09T17:01:32.872 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T17:01:32.890 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua5.1.
2026-03-09T17:01:32.895 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-09T17:01:32.896 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-09T17:01:32.913 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua-any.
2026-03-09T17:01:32.917 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-09T17:01:32.918 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-09T17:01:32.929 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package zip.
2026-03-09T17:01:32.935 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-09T17:01:32.935 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking zip (3.0-12build2) ...
2026-03-09T17:01:32.952 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package unzip.
2026-03-09T17:01:32.957 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-09T17:01:32.958 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-09T17:01:32.977 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package luarocks.
2026-03-09T17:01:32.982 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-09T17:01:32.983 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-09T17:01:33.030 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librgw2.
2026-03-09T17:01:33.035 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:33.035 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:33.148 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rgw.
2026-03-09T17:01:33.154 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:33.154 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:33.169 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-09T17:01:33.173 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-09T17:01:33.174 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T17:01:33.187 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libradosstriper1.
2026-03-09T17:01:33.191 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:33.191 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:33.214 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-common.
2026-03-09T17:01:33.218 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:33.219 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:33.667 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-base.
2026-03-09T17:01:33.672 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T17:01:33.676 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:33.777 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-09T17:01:33.784 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-09T17:01:33.785 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-09T17:01:33.801 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cheroot.
2026-03-09T17:01:33.807 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-09T17:01:33.808 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T17:01:33.829 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T17:01:33.835 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T17:01:33.835 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T17:01:33.848 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T17:01:33.853 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T17:01:33.854 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T17:01:33.889 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T17:01:33.893 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T17:01:33.894 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T17:01:33.905 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T17:01:33.909 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T17:01:33.910 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T17:01:33.925 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-portend. 2026-03-09T17:01:33.930 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T17:01:33.930 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T17:01:33.945 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-zc.lockfile. 
2026-03-09T17:01:33.952 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T17:01:33.952 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T17:01:33.966 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T17:01:33.971 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T17:01:33.971 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T17:01:34.018 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T17:01:34.023 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T17:01:34.023 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T17:01:34.040 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T17:01:34.045 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T17:01:34.046 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T17:01:34.061 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-mako. 2026-03-09T17:01:34.066 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T17:01:34.067 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T17:01:34.088 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T17:01:34.094 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 
2026-03-09T17:01:34.100 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T17:01:34.127 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T17:01:34.133 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T17:01:34.134 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T17:01:34.149 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-webob. 2026-03-09T17:01:34.156 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T17:01:34.165 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T17:01:34.198 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T17:01:34.203 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T17:01:34.205 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T17:01:34.221 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T17:01:34.226 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T17:01:34.227 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T17:01:34.242 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-paste. 2026-03-09T17:01:34.248 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T17:01:34.248 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 
2026-03-09T17:01:34.524 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T17:01:34.529 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T17:01:34.531 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T17:01:34.547 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T17:01:34.553 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T17:01:34.554 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T17:01:34.573 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T17:01:34.578 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T17:01:34.579 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T17:01:34.595 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T17:01:34.600 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T17:01:34.600 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T17:01:34.631 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T17:01:34.636 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T17:01:34.637 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 
2026-03-09T17:01:34.659 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T17:01:34.664 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:01:34.665 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:34.705 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T17:01:34.710 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:34.711 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:34.726 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T17:01:34.731 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:34.732 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:34.761 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T17:01:34.767 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:34.768 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:34.864 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T17:01:34.870 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T17:01:34.871 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 
2026-03-09T17:01:34.900 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T17:01:34.906 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:34.907 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:35.212 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph. 2026-03-09T17:01:35.218 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:35.219 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:35.237 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T17:01:35.242 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:35.243 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:35.278 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T17:01:35.283 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:35.290 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:35.352 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package cephadm. 2026-03-09T17:01:35.358 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:35.358 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:01:35.378 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T17:01:35.383 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T17:01:35.384 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T17:01:35.411 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T17:01:35.416 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:01:35.417 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:35.441 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T17:01:35.446 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T17:01:35.447 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T17:01:35.467 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-routes. 2026-03-09T17:01:35.473 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T17:01:35.473 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T17:01:35.499 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T17:01:35.505 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:01:35.506 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:01:35.888 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T17:01:35.893 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T17:01:35.894 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T17:01:35.964 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T17:01:35.970 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T17:01:35.970 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T17:01:36.003 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T17:01:36.008 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T17:01:36.009 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T17:01:36.024 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T17:01:36.029 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T17:01:36.030 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T17:01:36.161 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T17:01:36.167 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:01:36.168 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:01:36.479 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T17:01:36.484 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T17:01:36.486 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T17:01:36.505 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T17:01:36.511 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T17:01:36.512 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T17:01:36.534 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T17:01:36.540 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T17:01:36.541 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T17:01:36.571 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T17:01:36.577 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T17:01:36.577 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T17:01:36.616 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T17:01:36.621 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T17:01:36.622 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T17:01:36.640 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-kubernetes. 
2026-03-09T17:01:36.644 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T17:01:36.657 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T17:01:36.810 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T17:01:36.817 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:01:36.818 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:36.838 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T17:01:36.844 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T17:01:36.845 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T17:01:36.861 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T17:01:36.867 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T17:01:36.868 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T17:01:36.885 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package jq. 2026-03-09T17:01:36.891 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T17:01:36.892 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T17:01:36.908 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package socat. 2026-03-09T17:01:36.914 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 
2026-03-09T17:01:36.916 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T17:01:36.941 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T17:01:36.947 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T17:01:36.948 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T17:01:36.999 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-test. 2026-03-09T17:01:37.005 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:37.006 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:37.837 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T17:01:37.844 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T17:01:37.845 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:37.880 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T17:01:37.885 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:37.886 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:37.903 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T17:01:37.909 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 
2026-03-09T17:01:37.910 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T17:01:37.935 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T17:01:37.941 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T17:01:37.942 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T17:01:37.963 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T17:01:37.969 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T17:01:37.970 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T17:01:38.011 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package pkg-config. 2026-03-09T17:01:38.016 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T17:01:38.017 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T17:01:38.079 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T17:01:38.084 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T17:01:38.085 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T17:01:38.136 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T17:01:38.141 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T17:01:38.141 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 
2026-03-09T17:01:38.154 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T17:01:38.159 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T17:01:38.160 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T17:01:38.177 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T17:01:38.182 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T17:01:38.183 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T17:01:38.197 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T17:01:38.201 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T17:01:38.202 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T17:01:38.225 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-py. 2026-03-09T17:01:38.230 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T17:01:38.232 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T17:01:38.259 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T17:01:38.264 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T17:01:38.264 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T17:01:38.329 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pyinotify. 
2026-03-09T17:01:38.335 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T17:01:38.335 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T17:01:38.349 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-toml. 2026-03-09T17:01:38.354 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T17:01:38.355 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T17:01:38.369 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T17:01:38.374 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T17:01:38.375 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T17:01:38.412 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T17:01:38.416 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T17:01:38.417 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T17:01:38.463 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T17:01:38.468 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T17:01:38.468 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T17:01:38.574 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package radosgw. 2026-03-09T17:01:38.581 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-09T17:01:38.582 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:39.028 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T17:01:39.035 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T17:01:39.035 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:01:39.063 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package smartmontools. 2026-03-09T17:01:39.065 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T17:01:39.074 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T17:01:39.116 INFO:teuthology.orchestra.run.vm01.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T17:01:39.359 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T17:01:39.359 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T17:01:39.726 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T17:01:39.790 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T17:01:39.792 INFO:teuthology.orchestra.run.vm01.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T17:01:39.854 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 
2026-03-09T17:01:40.067 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-09T17:01:40.464 INFO:teuthology.orchestra.run.vm01.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-09T17:01:40.470 INFO:teuthology.orchestra.run.vm01.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T17:01:40.484 INFO:teuthology.orchestra.run.vm01.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:40.526 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user cephadm....done
2026-03-09T17:01:40.534 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T17:01:40.610 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-09T17:01:40.674 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T17:01:40.677 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-09T17:01:40.755 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-09T17:01:40.827 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T17:01:40.829 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-09T17:01:40.922 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T17:01:41.054 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-09T17:01:41.128 INFO:teuthology.orchestra.run.vm01.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-09T17:01:41.139 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-09T17:01:41.210 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-09T17:01:41.286 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:41.375 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T17:01:41.378 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-09T17:01:41.381 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T17:01:41.383 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T17:01:41.386 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T17:01:41.389 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-09T17:01:41.423 INFO:teuthology.orchestra.run.vm01.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-09T17:01:41.425 INFO:teuthology.orchestra.run.vm01.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-09T17:01:41.427 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T17:01:41.465 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-09T17:01:41.591 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-09T17:01:41.669 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T17:01:41.748 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-09T17:01:41.848 INFO:teuthology.orchestra.run.vm01.stdout:Setting up zip (3.0-12build2) ...
2026-03-09T17:01:41.851 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T17:01:42.150 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T17:01:42.305 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T17:01:42.309 INFO:teuthology.orchestra.run.vm01.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-09T17:01:42.312 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T17:01:42.404 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T17:01:42.552 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T17:01:42.688 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T17:01:42.786 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T17:01:42.908 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-09T17:01:42.974 INFO:teuthology.orchestra.run.vm01.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-09T17:01:42.976 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:43.065 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T17:01:43.638 INFO:teuthology.orchestra.run.vm01.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T17:01:43.663 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:01:43.667 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-09T17:01:43.745 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T17:01:43.748 INFO:teuthology.orchestra.run.vm01.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-09T17:01:43.750 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-09T17:01:43.823 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-09T17:01:43.937 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:01:43.940 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-09T17:01:44.015 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-09T17:01:44.089 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-09T17:01:44.171 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-09T17:01:44.244 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-09T17:01:44.315 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-09T17:01:44.526 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T17:01:44.530 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-09T17:01:44.729 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T17:01:44.732 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T17:01:44.811 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T17:01:44.913 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T17:01:45.007 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-09T17:01:45.079 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T17:01:45.082 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-09T17:01:45.084 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T17:01:45.087 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T17:01:45.238 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-09T17:01:45.314 INFO:teuthology.orchestra.run.vm01.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-09T17:01:45.317 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-09T17:01:45.397 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:01:45.399 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-09T17:01:45.479 INFO:teuthology.orchestra.run.vm01.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-09T17:01:45.482 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-09T17:01:45.562 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-09T17:01:45.710 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-09T17:01:45.794 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T17:01:45.913 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T17:01:45.938 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:45.941 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:45.944 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T17:01:46.537 INFO:teuthology.orchestra.run.vm01.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-09T17:01:46.547 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:46.554 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:46.556 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:46.559 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:46.562 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:46.631 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T17:01:46.631 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T17:01:47.020 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.023 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.026 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.030 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.033 INFO:teuthology.orchestra.run.vm01.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.036 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.039 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.042 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.083 INFO:teuthology.orchestra.run.vm01.stdout:Adding group ceph....done
2026-03-09T17:01:47.121 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user ceph....done
2026-03-09T17:01:47.130 INFO:teuthology.orchestra.run.vm01.stdout:Setting system user ceph properties....done
2026-03-09T17:01:47.134 INFO:teuthology.orchestra.run.vm01.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-09T17:01:47.204 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-09T17:01:47.441 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-09T17:01:47.843 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:47.846 INFO:teuthology.orchestra.run.vm01.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:48.088 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T17:01:48.088 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T17:01:48.460 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:48.548 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-09T17:01:48.983 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:49.047 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T17:01:49.047 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T17:01:49.424 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:49.500 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T17:01:49.500 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T17:01:49.852 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:49.936 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T17:01:49.937 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T17:01:50.286 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:50.289 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:50.302 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:50.365 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T17:01:50.365 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T17:01:50.770 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:50.784 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:50.787 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:50.800 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:01:50.950 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T17:01:50.958 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T17:01:50.979 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T17:01:51.063 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-09T17:01:51.413 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:51.413 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date.
2026-03-09T17:01:51.413 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:51.413 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted:
2026-03-09T17:01:51.420 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart packagekit.service
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred:
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted.
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries.
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:51.423 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T17:01:52.446 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:01:52.449 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-09T17:01:52.527 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:01:52.724 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:01:52.725 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:01:52.871 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:01:52.871 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T17:01:52.871 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T17:01:52.871 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:01:52.886 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed:
2026-03-09T17:01:52.886 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath python3-xmltodict
2026-03-09T17:01:53.137 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T17:01:53.137 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 34.3 kB of archives.
2026-03-09T17:01:53.137 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-09T17:01:53.137 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T17:01:53.240 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T17:01:53.465 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 34.3 kB in 0s (96.5 kB/s)
2026-03-09T17:01:53.479 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T17:01:53.512 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-09T17:01:53.514 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T17:01:53.515 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T17:01:53.537 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T17:01:53.544 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T17:01:53.545 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T17:01:53.578 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T17:01:53.650 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T17:01:54.006 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:54.006 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date.
2026-03-09T17:01:54.006 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:54.006 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted:
2026-03-09T17:01:54.014 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart packagekit.service
2026-03-09T17:01:54.017 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:54.017 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred:
2026-03-09T17:01:54.017 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T17:01:54.017 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service
2026-03-09T17:01:54.017 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:54.018 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted.
2026-03-09T17:01:54.018 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:54.018 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries.
2026-03-09T17:01:54.018 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:01:54.018 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T17:01:54.996 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:01:55.000 DEBUG:teuthology.parallel:result is None
2026-03-09T17:01:55.000 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:01:55.664 DEBUG:teuthology.orchestra.run.vm01:> dpkg-query -W -f '${Version}' ceph
2026-03-09T17:01:55.676 INFO:teuthology.orchestra.run.vm01.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T17:01:55.676 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T17:01:55.677 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T17:01:55.677 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-09T17:01:55.677 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-09T17:01:55.677 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T17:01:55.731 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-09T17:01:55.731 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-09T17:01:55.731 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T17:01:55.781 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T17:01:55.832 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-09T17:01:55.832 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-09T17:01:55.832 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T17:01:55.884 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T17:01:55.936 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-09T17:01:55.936 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-09T17:01:55.936 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T17:01:55.985 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T17:01:56.036 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-09T17:01:56.086 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_REFRESH_FAILED'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T17:01:56.086 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:01:56.086 INFO:tasks.cephadm:Cluster fsid is adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:01:56.086 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T17:01:56.086 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.101'}
2026-03-09T17:01:56.086 INFO:tasks.cephadm:First mon is mon.a on vm01
2026-03-09T17:01:56.086 INFO:tasks.cephadm:First mgr is a
2026-03-09T17:01:56.086 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T17:01:56.086 DEBUG:teuthology.orchestra.run.vm01:> sudo hostname $(hostname -s)
2026-03-09T17:01:56.094 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra
2026-03-09T17:01:56.095 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:01:56.754 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-09T17:01:57.428 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:01:57.429 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T17:01:57.429 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T17:01:57.429 DEBUG:teuthology.orchestra.run.vm01:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T17:01:58.878 INFO:teuthology.orchestra.run.vm01.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 17:01 /home/ubuntu/cephtest/cephadm
2026-03-09T17:01:58.878 DEBUG:teuthology.orchestra.run.vm01:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T17:01:58.885 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-09T17:01:58.885 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T17:01:59.021 INFO:teuthology.orchestra.run.vm01.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T17:02:55.837 INFO:teuthology.orchestra.run.vm01.stdout:{
2026-03-09T17:02:55.837 INFO:teuthology.orchestra.run.vm01.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T17:02:55.837 INFO:teuthology.orchestra.run.vm01.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T17:02:55.837 INFO:teuthology.orchestra.run.vm01.stdout: "repo_digests": [
2026-03-09T17:02:55.837 INFO:teuthology.orchestra.run.vm01.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T17:02:55.837 INFO:teuthology.orchestra.run.vm01.stdout: ]
2026-03-09T17:02:55.837 INFO:teuthology.orchestra.run.vm01.stdout:}
2026-03-09T17:02:55.853 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph
2026-03-09T17:02:55.861 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /etc/ceph
2026-03-09T17:02:55.913 INFO:tasks.cephadm:Writing seed config...
2026-03-09T17:02:55.913 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-09T17:02:55.913 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T17:02:55.913 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T17:02:55.913 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T17:02:55.913 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T17:02:55.913 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T17:02:55.913 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T17:02:55.914 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T17:02:55.914 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T17:02:55.914 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-09T17:02:55.914 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T17:02:55.959 DEBUG:tasks.cephadm:Final config: [global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = adad5454-1bd9-11f1-a78e-99ee5fbec3ab
mon election default strategy = 1
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T17:02:55.959 DEBUG:teuthology.orchestra.run.vm01:mon.a> sudo journalctl -f -n 0 -u ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a.service
2026-03-09T17:02:56.001 DEBUG:teuthology.orchestra.run.vm01:mgr.a> sudo journalctl -f -n 0 -u ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mgr.a.service
2026-03-09T17:02:56.045 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T17:02:56.045 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.101 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T17:02:56.186 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-09T17:02:56.186 INFO:teuthology.orchestra.run.vm01.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'adad5454-1bd9-11f1-a78e-99ee5fbec3ab', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.101', '--skip-admin-label']
2026-03-09T17:02:56.186 INFO:teuthology.orchestra.run.vm01.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-09T17:02:56.186 INFO:teuthology.orchestra.run.vm01.stdout:Verifying podman|docker is present...
2026-03-09T17:02:56.186 INFO:teuthology.orchestra.run.vm01.stdout:Verifying lvm2 is present...
2026-03-09T17:02:56.186 INFO:teuthology.orchestra.run.vm01.stdout:Verifying time synchronization is in place...
2026-03-09T17:02:56.190 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T17:02:56.190 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T17:02:56.193 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T17:02:56.193 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.195 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T17:02:56.195 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T17:02:56.197 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T17:02:56.197 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.200 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T17:02:56.200 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout masked
2026-03-09T17:02:56.202 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T17:02:56.202 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.205 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T17:02:56.205 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T17:02:56.208 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T17:02:56.208 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.211 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-09T17:02:56.214 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-09T17:02:56.214 INFO:teuthology.orchestra.run.vm01.stdout:Unit ntp.service is enabled and running
2026-03-09T17:02:56.214 INFO:teuthology.orchestra.run.vm01.stdout:Repeating the final host check...
2026-03-09T17:02:56.214 INFO:teuthology.orchestra.run.vm01.stdout:docker (/usr/bin/docker) is present
2026-03-09T17:02:56.214 INFO:teuthology.orchestra.run.vm01.stdout:systemctl is present
2026-03-09T17:02:56.214 INFO:teuthology.orchestra.run.vm01.stdout:lvcreate is present
2026-03-09T17:02:56.217 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T17:02:56.217 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T17:02:56.219 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T17:02:56.220 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.223 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T17:02:56.223 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T17:02:56.226 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T17:02:56.226 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.230 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T17:02:56.230 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout masked
2026-03-09T17:02:56.232 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T17:02:56.232 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.236 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T17:02:56.236 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T17:02:56.239 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T17:02:56.239 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-09T17:02:56.243 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:Unit ntp.service is enabled and running
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:Host looks OK
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:Cluster fsid: adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:Acquiring lock 140065260882480 on /run/cephadm/adad5454-1bd9-11f1-a78e-99ee5fbec3ab.lock
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:Lock 140065260882480 acquired on /run/cephadm/adad5454-1bd9-11f1-a78e-99ee5fbec3ab.lock
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 3300 ...
2026-03-09T17:02:56.246 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 6789 ...
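The time-synchronization check above probes a list of candidate systemd units (`chrony.service`, `chronyd.service`, `systemd-timesyncd.service`, `ntpd.service`, `ntp.service`) with `systemctl is-enabled` / `systemctl is-active`, tolerating the non-zero exit codes, until one unit reports both `enabled` and `active`. A minimal sketch of that probing loop (illustrative only; the function and its signature are not cephadm's actual code, which lives in its host-check logic):

```python
# Hypothetical sketch of the unit-probing loop seen in the log above.
# is_enabled / is_active stand in for `systemctl is-enabled <unit>` and
# `systemctl is-active <unit>`: they return the state string ('enabled',
# 'masked', 'active', 'inactive', ...) or None if the unit is missing.
TIME_SYNC_UNITS = [
    'chrony.service',
    'chronyd.service',
    'systemd-timesyncd.service',
    'ntpd.service',
    'ntp.service',
]

def find_active_unit(units, is_enabled, is_active):
    """Return the first unit that is both enabled and active, else None."""
    for unit in units:
        if is_enabled(unit) == 'enabled' and is_active(unit) == 'active':
            return unit
    return None
```

In the run above, every unit before `ntp.service` is either absent (`is-enabled` exits 1 with "No such file or directory"), masked, or inactive (`is-active` exits 3), so the check only succeeds on `ntp.service`, which explains the "Unit ntp.service is enabled and running" line.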
2026-03-09T17:02:56.247 INFO:teuthology.orchestra.run.vm01.stdout:Base mon IP(s) is [192.168.123.101:3300, 192.168.123.101:6789], mon addrv is [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-09T17:02:56.248 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.101 metric 100
2026-03-09T17:02:56.248 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-09T17:02:56.248 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.101 metric 100
2026-03-09T17:02:56.248 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.101 metric 100
2026-03-09T17:02:56.249 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-09T17:02:56.249 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:1/64 scope link
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.1/32`
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.1/32`
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-09T17:02:56.252 INFO:teuthology.orchestra.run.vm01.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T17:02:57.219 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-09T17:02:57.219 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T17:02:57.219 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:02:57.219 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T17:02:57.407 INFO:teuthology.orchestra.run.vm01.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T17:02:57.407 INFO:teuthology.orchestra.run.vm01.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T17:02:57.407 INFO:teuthology.orchestra.run.vm01.stdout:Extracting ceph user uid/gid from container image...
2026-03-09T17:02:57.596 INFO:teuthology.orchestra.run.vm01.stdout:stat: stdout 167 167
2026-03-09T17:02:57.596 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial keys...
2026-03-09T17:02:57.718 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQBB/a5pPzbiKBAAimKfpTwN9NMw1Xcvb7juwQ==
2026-03-09T17:02:57.829 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQBB/a5pX9G+LxAA6yBASuzw9bHyytgeVEGSfA==
2026-03-09T17:02:57.973 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQBB/a5prR+VNhAA0ZathdwuyPD4+oxK+YYeJA==
2026-03-09T17:02:57.973 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial monmap...
2026-03-09T17:02:58.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T17:02:58.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-09T17:02:58.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:58.094 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T17:02:58.095 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool for a [v2:192.168.123.101:3300,v1:192.168.123.101:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T17:02:58.095 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = quincy
2026-03-09T17:02:58.095 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: set fsid to adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:58.095 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T17:02:58.095 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:02:58.095 INFO:teuthology.orchestra.run.vm01.stdout:Creating mon...
2026-03-09T17:02:58.229 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.179+0000 7ff89304bd80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T17:02:58.229 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.179+0000 7ff89304bd80 1 imported monmap:
2026-03-09T17:02:58.229 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-09T17:02:58.229 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-09T17:02:58.064101+0000
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr created 2026-03-09T17:02:58.064101+0000
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.179+0000 7ff89304bd80 0 /usr/bin/ceph-mon: set fsid to adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Git sha 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: DB SUMMARY
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: DB Session ID: JPIRZYLU8RUGRH1D0VG9
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.create_if_missing: 1
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.env: 0x55cf26eaddc0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.info_log: 0x55cf67586da0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.statistics: (nil)
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.use_fsync: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T17:02:58.230 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.db_log_dir:
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.wal_dir:
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.write_buffer_manager: 0x55cf6757d5e0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.unordered_write: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.row_cache: None
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.wal_filter: None
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.two_write_queues: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.wal_compression: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.atomic_flush: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.log_readahead_size: 0
2026-03-09T17:02:58.231 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_background_jobs: 2
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_background_compactions: -1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_subcompactions: 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_open_files: -1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_background_flushes: -1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Compression algorithms supported:
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kZSTD supported: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kXpressCompression supported: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kBZip2Compression supported: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kLZ4Compression supported: 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kZlibCompression supported: 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: kSnappyCompression supported: 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.merge_operator:
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_filter: None
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf67579520)
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-09T17:02:58.237 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55cf6759f350
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression: NoCompression
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.num_levels: 7
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-09T17:02:58.238 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.inplace_update_support: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.bloom_locality: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.max_successive_merges: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.ttl: 2592000
2026-03-09T17:02:58.239 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.enable_blob_files: false
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.min_blob_size: 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.183+0000 7ff89304bd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff89304bd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff89304bd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff89304bd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bd95b9b-1f66-4128-9dc0-028ef4617041
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff89304bd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff89304bd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cf675a0e00
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff89304bd80 4 rocksdb: DB pointer 0x55cf67684000
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff88a7d5640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.187+0000 7ff88a7d5640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55cf6759f350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] **
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.191+0000 7ff89304bd80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2026-03-09T17:02:58.241 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.191+0000 7ff89304bd80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2026-03-09T17:02:58.242 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T17:02:58.191+0000 7ff89304bd80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-09T17:02:58.242 INFO:teuthology.orchestra.run.vm01.stdout:create mon.a on
2026-03-09T17:02:58.434 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target.
2026-03-09T17:02:58.616 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-09T17:02:58.815 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab.target → /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab.target.
2026-03-09T17:02:58.815 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab.target → /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab.target.
2026-03-09T17:02:59.020 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 systemd[1]: /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:02:59.027 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a
2026-03-09T17:02:59.027 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a.service: Unit ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a.service not loaded.
2026-03-09T17:02:59.205 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab.target.wants/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a.service → /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service.
2026-03-09T17:02:59.212 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present
2026-03-09T17:02:59.212 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available
2026-03-09T17:02:59.212 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon to start...
2026-03-09T17:02:59.212 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon...
2026-03-09T17:02:59.276 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 systemd[1]: /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:02:59.277 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 systemd[1]: /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:02:59.277 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 systemd[1]: /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:02:59.277 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 systemd[1]: Started Ceph mon.a for adad5454-1bd9-11f1-a78e-99ee5fbec3ab.
2026-03-09T17:02:59.475 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout cluster:
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout id: adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout services:
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.0908721s)
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout data:
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pgs:
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:mon is available
2026-03-09T17:02:59.476 INFO:teuthology.orchestra.run.vm01.stdout:Assimilating anything we can from ceph.conf...
2026-03-09T17:02:59.642 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 bash[20222]: cluster 2026-03-09T17:02:59.349511+0000 mon.a (mon.0) 0 : cluster [INF] mkfs adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:59.642 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 bash[20222]: cluster 2026-03-09T17:02:59.349511+0000 mon.a (mon.0) 0 : cluster [INF] mkfs adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:59.642 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 bash[20222]: cluster 2026-03-09T17:02:59.344244+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T17:02:59.642 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 bash[20222]: cluster 2026-03-09T17:02:59.344244+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T17:02:59.689 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T17:02:59.690 INFO:teuthology.orchestra.run.vm01.stdout:Generating new minimal ceph.conf...
2026-03-09T17:02:59.911 INFO:teuthology.orchestra.run.vm01.stdout:Restarting the monitor...
2026-03-09T17:03:00.042 INFO:teuthology.orchestra.run.vm01.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section
2026-03-09T17:03:00.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 systemd[1]: Stopping Ceph mon.a for adad5454-1bd9-11f1-a78e-99ee5fbec3ab...
2026-03-09T17:03:00.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 bash[20222]: debug 2026-03-09T17:02:59.955+0000 7f265d94e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T17:03:00.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 bash[20222]: debug 2026-03-09T17:02:59.955+0000 7f265d94e640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-09T17:03:00.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:02:59 vm01 bash[20610]: ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab-mon-a
2026-03-09T17:03:00.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 systemd[1]: ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a.service: Deactivated successfully.
2026-03-09T17:03:00.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 systemd[1]: Stopped Ceph mon.a for adad5454-1bd9-11f1-a78e-99ee5fbec3ab.
2026-03-09T17:03:00.174 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 systemd[1]: Started Ceph mon.a for adad5454-1bd9-11f1-a78e-99ee5fbec3ab.
2026-03-09T17:03:00.335 INFO:teuthology.orchestra.run.vm01.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-09T17:03:00.336 INFO:teuthology.orchestra.run.vm01.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-09T17:03:00.336 INFO:teuthology.orchestra.run.vm01.stdout:Creating mgr...
2026-03-09T17:03:00.336 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-09T17:03:00.336 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-09T17:03:00.480 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.163+0000 7ff422a43d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T17:03:00.480 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.163+0000 7ff422a43d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 6
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.163+0000 7ff422a43d80 0 pidfile_write: ignore empty --pid-file
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 0 load: jerasure load: lrc
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Git sha 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: DB SUMMARY
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: DB Session ID: TQI3ZWIS17HVGT1ISCYR
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: CURRENT file: CURRENT
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75507 ;
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.create_if_missing: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.env: 0x55e258f8fdc0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.info_log: 0x55e288566700
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.statistics: (nil)
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.use_fsync: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.db_log_dir:
2026-03-09T17:03:00.481 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.wal_dir:
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.write_buffer_manager: 0x55e28856b900
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.unordered_write: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.row_cache: None
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.wal_filter: None
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T17:03:00.482 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.two_write_queues: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.wal_compression: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.atomic_flush: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_open_files: -1
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Compression algorithms supported:
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kZSTD supported: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kXpressCompression supported: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-09T17:03:00.483 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kZlibCompression supported: 1
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.merge_operator:
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_filter: None
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e288566640)
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cache_index_and_filter_blocks: 1
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: pin_top_level_index_and_filter: 1
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: index_type: 0
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: data_block_index_type: 0
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: index_shortening: 1
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: data_block_hash_table_util_ratio: 0.750000
2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: checksum: 4
2026-03-09T17:03:00.484
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: no_block_cache: 0 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_cache: 0x55e28858d350 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_cache_name: BinnedLRUCache 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_cache_options: 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: capacity : 536870912 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: num_shard_bits : 4 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: strict_capacity_limit : 0 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: high_pri_pool_ratio: 0.000 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_cache_compressed: (nil) 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: persistent_cache: (nil) 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_size: 4096 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_size_deviation: 10 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_restart_interval: 16 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: index_block_restart_interval: 1 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: metadata_block_size: 4096 2026-03-09T17:03:00.484 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: partition_filters: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
17:03:00 vm01 bash[20698]: use_delta_encoding: 1 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: filter_policy: bloomfilter 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: whole_key_filtering: 1 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: verify_compression: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: read_amp_bytes_per_bit: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: format_version: 5 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: enable_index_compression: 1 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: block_align: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: max_auto_readahead_size: 262144 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: prepopulate_block_cache: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: initial_auto_readahead_size: 8192 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: num_file_reads_for_auto_readahead: 2 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression: NoCompression 
2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.num_levels: 7 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: 
Options.bottommost_compression_opts.strategy: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 
bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T17:03:00.485 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 
2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: 
Options.compaction_pri: kMinOverlappingRatio 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T17:03:00.486 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: 
Options.force_consistency_checks: 1 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T17:03:00.487 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.167+0000 7ff422a43d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bd95b9b-1f66-4128-9dc0-028ef4617041 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 
2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773075780174543, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773075780176419, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70867, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65346, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773075780, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bd95b9b-1f66-4128-9dc0-028ef4617041", "db_session_id": "TQI3ZWIS17HVGT1ISCYR", "orig_file_number": 13, 
"seqno_to_time_mapping": "N/A"}} 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773075780176483, "job": 1, "event": "recovery_finished"} 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.171+0000 7ff422a43d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.179+0000 7ff422a43d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.179+0000 7ff422a43d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e28858ee00 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.179+0000 7ff422a43d80 4 rocksdb: DB pointer 0x55e2886a4000 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.179+0000 7ff41880d640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: debug 2026-03-09T17:03:00.179+0000 7ff41880d640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: ** DB Stats ** 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Cumulative writes: 0 writes, 0 keys, 0 
commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T17:03:00.487 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: ** Compaction Stats [default] ** 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: L0 2/0 72.74 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 41.1 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Sum 2/0 72.74 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 41.1 0.00 0.00 1 0.002 0 0 0.0 0.0 
2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 41.1 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: ** Compaction Stats [default] ** 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 41.1 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T17:03:00.488 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Cumulative compaction: 0.00 GB write, 5.69 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Interval compaction: 0.00 GB write, 5.69 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Block cache BinnedLRUCache@0x55e28858d350#6 capacity: 512.00 MB usage: 1.06 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.3e-05 secs_since: 0 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198202+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198202+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T17:03:00.488 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198280+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198280+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198293+0000 mon.a (mon.0) 3 : cluster [DBG] fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198293+0000 mon.a (mon.0) 3 : cluster [DBG] fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198303+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T17:02:58.064101+0000 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198303+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T17:02:58.064101+0000 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198315+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T17:02:58.064101+0000 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198315+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T17:02:58.064101+0000 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198324+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198324+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 
(squid) 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198334+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198334+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198343+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198343+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198617+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198617+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198642+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.198642+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.199377+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T17:03:00.488 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 bash[20698]: cluster 2026-03-09T17:03:00.199377+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons 
active 2026-03-09T17:03:00.540 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mgr.a 2026-03-09T17:03:00.541 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mgr.a.service: Unit ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mgr.a.service not loaded. 2026-03-09T17:03:00.756 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab.target.wants/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mgr.a.service → /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service. 2026-03-09T17:03:00.761 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 systemd[1]: /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:03:00.761 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:00 vm01 systemd[1]: /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T17:03:00.761 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:00 vm01 systemd[1]: Started Ceph mgr.a for adad5454-1bd9-11f1-a78e-99ee5fbec3ab. 
2026-03-09T17:03:00.780 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-09T17:03:00.780 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T17:03:00.780 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-09T17:03:00.780 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T17:03:00.780 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr to start... 2026-03-09T17:03:00.780 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr... 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "adad5454-1bd9-11f1-a78e-99ee5fbec3ab", 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 
2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T17:03:01.020 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T17:03:01.021 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T17:03:01.021 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T17:02:59:349012+0000", 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T17:03:01.022 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T17:02:59.349798+0000", 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:03:01.022 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (1/15)... 
2026-03-09T17:03:01.049 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20966]: debug 2026-03-09T17:03:00.999+0000 7f74f47c5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T17:03:01.406 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20966]: debug 2026-03-09T17:03:01.043+0000 7f74f47c5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:03:01.406 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20966]: debug 2026-03-09T17:03:01.167+0000 7f74f47c5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:03:01.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20698]: audit 2026-03-09T17:03:00.288509+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.101:0/2615426944' entity='client.admin' 2026-03-09T17:03:01.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20698]: audit 2026-03-09T17:03:00.288509+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.101:0/2615426944' entity='client.admin' 2026-03-09T17:03:01.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20698]: audit 2026-03-09T17:03:00.974892+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.101:0/3731429176' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:03:01.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20698]: audit 2026-03-09T17:03:00.974892+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.101:0/3731429176' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:03:01.906 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20966]: debug 2026-03-09T17:03:01.479+0000 7f74f47c5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:03:02.291 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:01 vm01 bash[20966]: debug 2026-03-09T17:03:01.939+0000 7f74f47c5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:03:02.291 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.023+0000 7f74f47c5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:03:02.291 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T17:03:02.291 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T17:03:02.291 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: from numpy import show_config as show_numpy_config 2026-03-09T17:03:02.291 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.143+0000 7f74f47c5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:03:02.656 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.287+0000 7f74f47c5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:03:02.656 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.327+0000 7f74f47c5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:03:02.656 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.367+0000 7f74f47c5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:03:02.656 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.411+0000 7f74f47c5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:03:02.657 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.467+0000 7f74f47c5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:03:03.214 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.943+0000 7f74f47c5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:03:03.214 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:02 vm01 bash[20966]: debug 2026-03-09T17:03:02.979+0000 7f74f47c5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:03:03.214 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.019+0000 7f74f47c5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "adad5454-1bd9-11f1-a78e-99ee5fbec3ab", 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T17:03:03.345 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T17:03:03.345 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T17:03:03.346 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T17:03:03.346 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T17:03:03.346 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:03.346 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T17:03:03.346 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T17:03:03.346 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T17:03:03.347 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T17:02:59:349012+0000", 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T17:02:59.349798+0000", 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:03:03.347 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (2/15)... 2026-03-09T17:03:03.636 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.211+0000 7f74f47c5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:03:03.636 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.267+0000 7f74f47c5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T17:03:03.636 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.319+0000 7f74f47c5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:03:03.636 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.443+0000 7f74f47c5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:03:03.636 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20698]: audit 2026-03-09T17:03:03.275287+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.101:0/2415917642' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:03:03.637 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20698]: audit 2026-03-09T17:03:03.275287+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.101:0/2415917642' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:03:03.906 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.631+0000 7f74f47c5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:03:03.907 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.823+0000 7f74f47c5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:03:03.907 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.859+0000 7f74f47c5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:03:04.406 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:03 vm01 bash[20966]: debug 2026-03-09T17:03:03.907+0000 7f74f47c5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:03:04.406 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20966]: debug 2026-03-09T17:03:04.131+0000 7f74f47c5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:03:04.906 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20966]: debug 2026-03-09T17:03:04.499+0000 7f74f47c5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:03:04.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: cluster 2026-03-09T17:03:04.506730+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T17:03:04.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: cluster 2026-03-09T17:03:04.506730+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T17:03:04.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: cluster 2026-03-09T17:03:04.512016+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.00545198s) 2026-03-09T17:03:04.907 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: cluster 2026-03-09T17:03:04.512016+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.00545198s) 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.515766+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.515766+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.516122+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.516122+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.516491+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.516491+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.517483+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 
192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.517483+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.518503+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.518503+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: cluster 2026-03-09T17:03:04.527724+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon a is now available 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: cluster 2026-03-09T17:03:04.527724+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon a is now available 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.541036+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.541036+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.541368+0000 mon.a (mon.0) 24 : audit [INF] 
from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.541368+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.544060+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.544060+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.544860+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.544860+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.547120+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' 2026-03-09T17:03:04.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:04 vm01 bash[20698]: audit 2026-03-09T17:03:04.547120+0000 mon.a (mon.0) 27 : audit [INF] 
from='mgr.14100 192.168.123.101:0/1411322105' entity='mgr.a' 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "adad5454-1bd9-11f1-a78e-99ee5fbec3ab", 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 
2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T17:03:05.744 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T17:03:05.745 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T17:02:59:349012+0000", 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:05.745 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T17:02:59.349798+0000", 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:03:05.746 INFO:teuthology.orchestra.run.vm01.stdout:mgr is available 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 
2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T17:03:06.038 INFO:teuthology.orchestra.run.vm01.stdout:Enabling cephadm module... 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20966]: ignoring --setuser ceph since I am not root 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20966]: ignoring --setgroup ceph since I am not root 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: cluster 2026-03-09T17:03:05.518698+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: a(active, since 1.01215s) 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: cluster 2026-03-09T17:03:05.518698+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: a(active, since 1.01215s) 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: audit 2026-03-09T17:03:05.704572+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.101:0/2150882370' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: audit 2026-03-09T17:03:05.704572+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.101:0/2150882370' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: audit 2026-03-09T17:03:05.994963+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 
192.168.123.101:0/2475555060' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: audit 2026-03-09T17:03:05.994963+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.101:0/2475555060' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: audit 2026-03-09T17:03:06.301344+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.101:0/2872564156' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T17:03:06.629 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20698]: audit 2026-03-09T17:03:06.301344+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.101:0/2872564156' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T17:03:06.890 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20966]: debug 2026-03-09T17:03:06.687+0000 7f9ef820c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T17:03:06.890 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20966]: debug 2026-03-09T17:03:06.727+0000 7f9ef820c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T17:03:06.890 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:06 vm01 bash[20966]: debug 2026-03-09T17:03:06.847+0000 7f9ef820c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 
"active_name": "a", 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-09T17:03:06.914 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 4... 2026-03-09T17:03:07.524 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20966]: debug 2026-03-09T17:03:07.191+0000 7f9ef820c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T17:03:07.869 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20966]: debug 2026-03-09T17:03:07.659+0000 7f9ef820c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T17:03:07.869 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20966]: debug 2026-03-09T17:03:07.743+0000 7f9ef820c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T17:03:07.869 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20698]: audit 2026-03-09T17:03:06.522294+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.101:0/2872564156' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T17:03:07.869 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20698]: audit 2026-03-09T17:03:06.522294+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 
192.168.123.101:0/2872564156' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T17:03:07.869 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20698]: cluster 2026-03-09T17:03:06.525414+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T17:03:07.870 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20698]: cluster 2026-03-09T17:03:06.525414+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T17:03:07.870 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20698]: audit 2026-03-09T17:03:06.852731+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.101:0/2752324907' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:03:07.870 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20698]: audit 2026-03-09T17:03:06.852731+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.101:0/2752324907' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T17:03:08.146 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20966]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T17:03:08.146 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20966]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T17:03:08.146 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20966]: from numpy import show_config as show_numpy_config 2026-03-09T17:03:08.146 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:07 vm01 bash[20966]: debug 2026-03-09T17:03:07.871+0000 7f9ef820c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T17:03:08.146 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.019+0000 7f9ef820c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T17:03:08.146 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.055+0000 7f9ef820c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T17:03:08.146 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.095+0000 7f9ef820c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T17:03:08.406 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.143+0000 7f9ef820c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T17:03:08.406 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.199+0000 7f9ef820c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T17:03:08.935 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.663+0000 7f9ef820c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T17:03:08.935 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.703+0000 7f9ef820c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T17:03:08.935 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.743+0000 7f9ef820c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-09T17:03:08.935 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.891+0000 7f9ef820c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T17:03:09.252 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.931+0000 7f9ef820c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T17:03:09.252 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:08 vm01 bash[20966]: debug 2026-03-09T17:03:08.971+0000 7f9ef820c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T17:03:09.252 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20966]: debug 2026-03-09T17:03:09.087+0000 7f9ef820c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:03:09.509 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20966]: debug 2026-03-09T17:03:09.247+0000 7f9ef820c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T17:03:09.510 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20966]: debug 2026-03-09T17:03:09.427+0000 7f9ef820c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T17:03:09.510 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20966]: debug 2026-03-09T17:03:09.463+0000 7f9ef820c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T17:03:09.904 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20966]: debug 2026-03-09T17:03:09.507+0000 7f9ef820c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T17:03:09.904 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20966]: debug 2026-03-09T17:03:09.655+0000 7f9ef820c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T17:03:10.156 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20966]: debug 2026-03-09T17:03:09.899+0000 7f9ef820c140 -1 
mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T17:03:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.906940+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon a restarted 2026-03-09T17:03:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.906940+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon a restarted 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.907202+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon a 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.907202+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon a 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.912072+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.912072+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.912202+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00511222s) 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.912202+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00511222s) 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.914498+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 
2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.914498+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.915465+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.915465+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.915961+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.915961+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.916049+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.916049+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T17:03:10.157 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.916128+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.916128+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.920622+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon a is now available 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: cluster 2026-03-09T17:03:09.920622+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon a is now available 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.929732+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.929732+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.934085+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.934085+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.947333+0000 mon.a (mon.0) 47 : audit [INF] 
from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.947333+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.949466+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.949466+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.951321+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.951321+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.958305+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T17:03:10.157 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:09 vm01 bash[20698]: audit 2026-03-09T17:03:09.958305+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T17:03:10.980 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-09T17:03:10.980 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-09T17:03:10.980 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T17:03:10.980 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-09T17:03:10.980 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 4 is available 2026-03-09T17:03:10.980 INFO:teuthology.orchestra.run.vm01.stdout:Setting orchestrator backend to cephadm... 2026-03-09T17:03:11.632 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20698]: cephadm 2026-03-09T17:03:09.926950+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T17:03:11.632 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20698]: cephadm 2026-03-09T17:03:09.926950+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-09T17:03:11.632 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20698]: audit 2026-03-09T17:03:10.343365+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:11.632 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20698]: audit 2026-03-09T17:03:10.346008+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:11.632 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20698]: cluster 2026-03-09T17:03:10.916242+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: a(active, since 1.00915s)
2026-03-09T17:03:11.632 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20698]: audit 2026-03-09T17:03:11.316931+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:11.632 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20698]: audit 2026-03-09T17:03:11.324448+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T17:03:11.660 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-09T17:03:11.660 INFO:teuthology.orchestra.run.vm01.stdout:Generating ssh key...
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: Generating public/private ed25519 key pair.
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: Your identification has been saved in /tmp/tmpzjlenpun/key
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: Your public key has been saved in /tmp/tmpzjlenpun/key.pub
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: The key fingerprint is:
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: SHA256:Zca53XKwJlFvtXBhbZoR74BS7Uz08N7lkU3JUKR8m58 ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: The key's randomart image is:
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: +--[ED25519 256]--+
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | o=OB*|
2026-03-09T17:03:12.238 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | . +.oOX*|
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | O o*+X*|
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | + = =*=*|
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | S o = oo=|
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | o o o|
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | E.|
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | |
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: | |
2026-03-09T17:03:12.239 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:11 vm01 bash[20966]: +----[SHA256]-----+
2026-03-09T17:03:12.268 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAY3BHx+olbClZgmdoHmTc+tqD0teznOEvDN66ZffP29 ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab
2026-03-09T17:03:12.268 INFO:teuthology.orchestra.run.vm01.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-09T17:03:12.268 INFO:teuthology.orchestra.run.vm01.stdout:Adding key to root@localhost authorized_keys...
2026-03-09T17:03:12.268 INFO:teuthology.orchestra.run.vm01.stdout:Adding host vm01...
2026-03-09T17:03:12.530 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: audit 2026-03-09T17:03:10.918339+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: audit 2026-03-09T17:03:10.922280+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: cephadm 2026-03-09T17:03:11.192928+0000 mgr.a (mgr.14118) 4 : cephadm [INF] [09/Mar/2026:17:03:11] ENGINE Bus STARTING
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: cephadm 2026-03-09T17:03:11.302894+0000 mgr.a (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:17:03:11] ENGINE Serving on https://192.168.123.101:7150
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: cephadm 2026-03-09T17:03:11.303874+0000 mgr.a (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:17:03:11] ENGINE Client ('192.168.123.101', 40088) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: audit 2026-03-09T17:03:11.313253+0000 mgr.a (mgr.14118) 7 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: cephadm 2026-03-09T17:03:11.403970+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:17:03:11] ENGINE Serving on http://192.168.123.101:8765
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: cephadm 2026-03-09T17:03:11.404179+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:17:03:11] ENGINE Bus STARTED
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: audit 2026-03-09T17:03:11.404749+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: audit 2026-03-09T17:03:11.918689+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:12.531 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:12 vm01 bash[20698]: audit 2026-03-09T17:03:11.921336+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:13.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:13 vm01 bash[20698]: audit 2026-03-09T17:03:11.619923+0000 mgr.a (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:03:13.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:13 vm01 bash[20698]: audit 2026-03-09T17:03:11.895407+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:03:13.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:13 vm01 bash[20698]: cephadm 2026-03-09T17:03:11.895640+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-09T17:03:13.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:13 vm01 bash[20698]: audit 2026-03-09T17:03:12.228116+0000 mgr.a (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:03:13.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:13 vm01 bash[20698]: cluster 2026-03-09T17:03:12.353091+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: a(active, since 2s)
2026-03-09T17:03:13.514 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:13 vm01 bash[20698]: audit 2026-03-09T17:03:12.520414+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:03:14.357 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:14 vm01 bash[20698]: cephadm 2026-03-09T17:03:13.243537+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm01
2026-03-09T17:03:14.661 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Added host 'vm01' with addr '192.168.123.101'
2026-03-09T17:03:14.661 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mon service...
2026-03-09T17:03:15.082 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-09T17:03:15.082 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mgr service...
2026-03-09T17:03:15.374 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: audit 2026-03-09T17:03:14.604677+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: cephadm 2026-03-09T17:03:14.605185+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm01
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: audit 2026-03-09T17:03:14.605789+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: audit 2026-03-09T17:03:14.923152+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: cephadm 2026-03-09T17:03:14.924276+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: audit 2026-03-09T17:03:14.928043+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: audit 2026-03-09T17:03:15.327911+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: cephadm 2026-03-09T17:03:15.328762+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: audit 2026-03-09T17:03:15.331882+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:15.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:15 vm01 bash[20698]: audit 2026-03-09T17:03:15.594132+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.101:0/2871232502' entity='client.admin'
2026-03-09T17:03:15.936 INFO:teuthology.orchestra.run.vm01.stdout:Enabling the dashboard module...
2026-03-09T17:03:17.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:16 vm01 bash[20698]: audit 2026-03-09T17:03:15.892958+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.101:0/3259129940' entity='client.admin'
2026-03-09T17:03:17.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:16 vm01 bash[20698]: audit 2026-03-09T17:03:16.199453+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:17.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:16 vm01 bash[20698]: audit 2026-03-09T17:03:16.266489+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.101:0/3747583010' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-09T17:03:17.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:16 vm01 bash[20698]: audit 2026-03-09T17:03:16.499092+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.101:0/2733119624' entity='mgr.a'
2026-03-09T17:03:17.521 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:17 vm01 bash[20966]: ignoring --setuser ceph since I am not root
2026-03-09T17:03:17.521 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:17 vm01 bash[20966]: ignoring --setgroup ceph since I am not root
2026-03-09T17:03:17.521 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:17 vm01 bash[20966]: debug 2026-03-09T17:03:17.335+0000 7f748d830140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T17:03:17.521 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:17 vm01 bash[20966]: debug 2026-03-09T17:03:17.375+0000 7f748d830140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T17:03:17.657 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-09T17:03:17.657 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 8,
2026-03-09T17:03:17.657 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T17:03:17.657 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-09T17:03:17.657 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T17:03:17.657 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-09T17:03:17.657 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart...
2026-03-09T17:03:17.658 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 8...
2026-03-09T17:03:17.851 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:17 vm01 bash[20966]: debug 2026-03-09T17:03:17.515+0000 7f748d830140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T17:03:18.156 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:17 vm01 bash[20966]: debug 2026-03-09T17:03:17.847+0000 7f748d830140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T17:03:18.517 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.303+0000 7f748d830140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T17:03:18.517 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.387+0000 7f748d830140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T17:03:18.517 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20698]: audit 2026-03-09T17:03:17.201072+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.101:0/3747583010' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-09T17:03:18.517 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20698]: cluster 2026-03-09T17:03:17.204643+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: a(active, since 7s)
2026-03-09T17:03:18.517 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20698]: audit 2026-03-09T17:03:17.605525+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.101:0/1289848816' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T17:03:18.799 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T17:03:18.799 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T17:03:18.799 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: from numpy import show_config as show_numpy_config
2026-03-09T17:03:18.799 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.519+0000 7f748d830140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T17:03:18.799 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.667+0000 7f748d830140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T17:03:18.799 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.715+0000 7f748d830140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T17:03:18.799 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.751+0000 7f748d830140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T17:03:19.156 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.795+0000 7f748d830140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T17:03:19.157 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:18 vm01 bash[20966]: debug 2026-03-09T17:03:18.847+0000 7f748d830140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T17:03:19.558 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.295+0000 7f748d830140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T17:03:19.558 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.335+0000 7f748d830140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T17:03:19.558 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.371+0000 7f748d830140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T17:03:19.906 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.555+0000 7f748d830140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T17:03:19.906 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.599+0000 7f748d830140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T17:03:19.907 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.643+0000 7f748d830140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T17:03:19.907 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.755+0000 7f748d830140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T17:03:20.187 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:19 vm01 bash[20966]: debug 2026-03-09T17:03:19.927+0000 7f748d830140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T17:03:20.187 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20966]: debug 2026-03-09T17:03:20.103+0000 7f748d830140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T17:03:20.187 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20966]: debug 2026-03-09T17:03:20.139+0000 7f748d830140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T17:03:20.589 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20966]: debug 2026-03-09T17:03:20.183+0000 7f748d830140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T17:03:20.589 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20966]: debug 2026-03-09T17:03:20.335+0000 7f748d830140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T17:03:20.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: cluster 2026-03-09T17:03:20.590382+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon a restarted
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: cluster 2026-03-09T17:03:20.590807+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon a
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: cluster 2026-03-09T17:03:20.596146+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: cluster 2026-03-09T17:03:20.596298+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00559327s)
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.599026+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.599750+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.601000+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.601377+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.601696+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: cluster 2026-03-09T17:03:20.608227+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon a is now available
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.629126+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.637672+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20698]: audit 2026-03-09T17:03:20.638951+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-09T17:03:20.907 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:03:20 vm01 bash[20966]: debug 2026-03-09T17:03:20.583+0000 7f748d830140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T17:03:21.650 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {
2026-03-09T17:03:21.650 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10,
2026-03-09T17:03:21.650 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T17:03:21.650 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }
2026-03-09T17:03:21.650 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 8 is available
2026-03-09T17:03:21.650 INFO:teuthology.orchestra.run.vm01.stdout:Generating a dashboard self-signed certificate...
2026-03-09T17:03:21.956 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-09T17:03:21.956 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial admin user...
2026-03-09T17:03:22.399 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$MEN87CT9p7j80sq569bZL.QjKnYaxVdmmsw0Loy3Bhog0z/rGTTlq", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773075802, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T17:03:22.399 INFO:teuthology.orchestra.run.vm01.stdout:Fetching dashboard port number... 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cluster 2026-03-09T17:03:21.599836+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: a(active, since 1.00913s) 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cluster 2026-03-09T17:03:21.599836+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: a(active, since 1.00913s) 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.659548+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Bus STARTING 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.659548+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Bus STARTING 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.774039+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.774039+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Serving on https://192.168.123.101:7150 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.774424+0000 mgr.a (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Client 
('192.168.123.101', 55044) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.774424+0000 mgr.a (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Client ('192.168.123.101', 55044) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.877590+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.877590+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Serving on http://192.168.123.101:8765 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.877633+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Bus STARTED 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: cephadm 2026-03-09T17:03:21.877633+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:17:03:21] ENGINE Bus STARTED 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:21.880409+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:21.880409+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard 
create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:21.908981+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:21.908981+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:21.912028+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:21.912028+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:22.201327+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:22.201327+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:22.356139+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 
2026-03-09T17:03:22.641 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:22 vm01 bash[20698]: audit 2026-03-09T17:03:22.356139+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:22.672 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T17:03:22.672 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-09T17:03:22.672 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-09T17:03:22.672 INFO:teuthology.orchestra.run.vm01.stdout:Ceph Dashboard is now available at: 2026-03-09T17:03:22.672 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:22.673 INFO:teuthology.orchestra.run.vm01.stdout: URL: https://vm01.local:8443/ 2026-03-09T17:03:22.673 INFO:teuthology.orchestra.run.vm01.stdout: User: admin 2026-03-09T17:03:22.673 INFO:teuthology.orchestra.run.vm01.stdout: Password: nciriwl4ia 2026-03-09T17:03:22.673 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:22.673 INFO:teuthology.orchestra.run.vm01.stdout:Saving cluster configuration to /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/config directory 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout:Or, if you are only running a single cluster on this host: 
2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.015 INFO:teuthology.orchestra.run.vm01.stdout: ceph telemetry on 2026-03-09T17:03:23.016 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.016 INFO:teuthology.orchestra.run.vm01.stdout:For more information see: 2026-03-09T17:03:23.016 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.016 INFO:teuthology.orchestra.run.vm01.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T17:03:23.016 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:03:23.016 INFO:teuthology.orchestra.run.vm01.stdout:Bootstrap complete. 2026-03-09T17:03:23.036 INFO:tasks.cephadm:Fetching config... 2026-03-09T17:03:23.036 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:03:23.036 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T17:03:23.038 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T17:03:23.038 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:03:23.038 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T17:03:23.083 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T17:03:23.083 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:03:23.083 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/keyring of=/dev/stdout 2026-03-09T17:03:23.132 INFO:tasks.cephadm:Fetching pub ssh key... 
2026-03-09T17:03:23.132 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:03:23.132 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T17:03:23.175 INFO:tasks.cephadm:Installing pub ssh key for root users... 2026-03-09T17:03:23.175 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAY3BHx+olbClZgmdoHmTc+tqD0teznOEvDN66ZffP29 ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T17:03:23.227 INFO:teuthology.orchestra.run.vm01.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAY3BHx+olbClZgmdoHmTc+tqD0teznOEvDN66ZffP29 ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:03:23.232 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T17:03:23.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:23 vm01 bash[20698]: audit 2026-03-09T17:03:22.629851+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1295814183' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T17:03:23.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:23 vm01 bash[20698]: audit 2026-03-09T17:03:22.629851+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.101:0/1295814183' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T17:03:23.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:23 vm01 bash[20698]: audit 2026-03-09T17:03:22.974643+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 
192.168.123.101:0/153993786' entity='client.admin' 2026-03-09T17:03:23.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:23 vm01 bash[20698]: audit 2026-03-09T17:03:22.974643+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.101:0/153993786' entity='client.admin' 2026-03-09T17:03:23.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:23 vm01 bash[20698]: cluster 2026-03-09T17:03:23.362829+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T17:03:23.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:23 vm01 bash[20698]: cluster 2026-03-09T17:03:23.362829+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T17:03:26.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:26 vm01 bash[20698]: audit 2026-03-09T17:03:25.451702+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:26.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:26 vm01 bash[20698]: audit 2026-03-09T17:03:25.451702+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:26.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:26 vm01 bash[20698]: audit 2026-03-09T17:03:26.097766+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:26.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:26 vm01 bash[20698]: audit 2026-03-09T17:03:26.097766+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:27.292 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:03:27.609 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T17:03:27.609 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T17:03:28.656 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:28 vm01 bash[20698]: cluster 2026-03-09T17:03:27.458339+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T17:03:28.657 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:28 vm01 bash[20698]: cluster 2026-03-09T17:03:27.458339+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T17:03:28.657 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:28 vm01 bash[20698]: audit 2026-03-09T17:03:27.549888+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.101:0/2310464461' entity='client.admin' 2026-03-09T17:03:28.657 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:28 vm01 bash[20698]: audit 2026-03-09T17:03:27.549888+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 
192.168.123.101:0/2310464461' entity='client.admin' 2026-03-09T17:03:32.305 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:03:32.654 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T17:03:32.654 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph osd crush tunables default 2026-03-09T17:03:32.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.856942+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.856942+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.859637+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.859637+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.860329+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.860329+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 
192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.863303+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.863303+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.868813+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.868813+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.872182+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:31.872182+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.558082+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.558082+0000 mon.a (mon.0) 102 : 
audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.558847+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.558847+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.560048+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.560048+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.560647+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.560647+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.718576+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 
192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.718576+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.721323+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.721323+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.723811+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:32.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:32 vm01 bash[20698]: audit 2026-03-09T17:03:32.723811+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: audit 2026-03-09T17:03:32.554995+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: audit 2026-03-09T17:03:32.554995+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 
2026-03-09T17:03:32.561416+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T17:03:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 2026-03-09T17:03:32.561416+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm01:/etc/ceph/ceph.conf 2026-03-09T17:03:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 2026-03-09T17:03:32.602239+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm01:/var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/config/ceph.conf 2026-03-09T17:03:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 2026-03-09T17:03:32.602239+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm01:/var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/config/ceph.conf 2026-03-09T17:03:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 2026-03-09T17:03:32.639132+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:03:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 2026-03-09T17:03:32.639132+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:03:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 2026-03-09T17:03:32.680599+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/config/ceph.client.admin.keyring 2026-03-09T17:03:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:33 vm01 bash[20698]: cephadm 2026-03-09T17:03:32.680599+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm01:/var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/config/ceph.client.admin.keyring 2026-03-09T17:03:36.314 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:03:36.625 
INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default 2026-03-09T17:03:36.685 INFO:tasks.cephadm:Adding mon.a on vm01 2026-03-09T17:03:36.685 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph orch apply mon '1;vm01:192.168.123.101=a' 2026-03-09T17:03:36.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:36 vm01 bash[20698]: audit 2026-03-09T17:03:36.572772+0000 mon.a (mon.0) 109 : audit [INF] from='client.? 192.168.123.101:0/1729850514' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T17:03:36.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:36 vm01 bash[20698]: audit 2026-03-09T17:03:36.572772+0000 mon.a (mon.0) 109 : audit [INF] from='client.? 192.168.123.101:0/1729850514' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T17:03:37.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:37 vm01 bash[20698]: audit 2026-03-09T17:03:36.624650+0000 mon.a (mon.0) 110 : audit [INF] from='client.? 192.168.123.101:0/1729850514' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T17:03:37.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:37 vm01 bash[20698]: audit 2026-03-09T17:03:36.624650+0000 mon.a (mon.0) 110 : audit [INF] from='client.? 
192.168.123.101:0/1729850514' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T17:03:37.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:37 vm01 bash[20698]: cluster 2026-03-09T17:03:36.626806+0000 mon.a (mon.0) 111 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:03:37.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:37 vm01 bash[20698]: cluster 2026-03-09T17:03:36.626806+0000 mon.a (mon.0) 111 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T17:03:40.323 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:03:40.600 INFO:teuthology.orchestra.run.vm01.stdout:Scheduled mon update... 2026-03-09T17:03:40.674 INFO:tasks.cephadm:Waiting for 1 mons in monmap... 2026-03-09T17:03:40.674 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph mon dump -f json 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.594513+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "1;vm01:192.168.123.101=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.594513+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "1;vm01:192.168.123.101=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cephadm 2026-03-09T17:03:40.595665+0000 mgr.a (mgr.14150) 16 
: cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;count:1 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cephadm 2026-03-09T17:03:40.595665+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Saving service mon spec with placement vm01:192.168.123.101=a;count:1 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.599230+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.599230+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.599954+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.599954+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.601097+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.601097+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 
17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.601479+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.601479+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cluster 2026-03-09T17:03:40.602052+0000 mgr.a (mgr.14150) 17 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cluster 2026-03-09T17:03:40.602052+0000 mgr.a (mgr.14150) 17 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.605928+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.605928+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.608213+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.608213+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 
2026-03-09T17:03:40.615677+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.615677+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.618253+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.618253+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.623247+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.623247+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.627789+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.627789+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cephadm 2026-03-09T17:03:40.628146+0000 mgr.a (mgr.14150) 18 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cephadm 2026-03-09T17:03:40.628146+0000 mgr.a (mgr.14150) 18 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.628341+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.628341+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.628847+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.628847+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.629320+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:40.629320+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cephadm 2026-03-09T17:03:40.629906+0000 mgr.a (mgr.14150) 19 : cephadm [INF] Reconfiguring daemon mon.a on vm01 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: cephadm 2026-03-09T17:03:40.629906+0000 mgr.a (mgr.14150) 19 : cephadm [INF] Reconfiguring daemon mon.a on vm01 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:41.024407+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:41.024407+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:41.026787+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:41.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:41 vm01 bash[20698]: audit 2026-03-09T17:03:41.026787+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:43.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:43 vm01 bash[20698]: cluster 2026-03-09T17:03:42.602233+0000 mgr.a (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:43.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:43 vm01 bash[20698]: cluster 2026-03-09T17:03:42.602233+0000 mgr.a (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:44.332 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:03:44.642 INFO:teuthology.orchestra.run.vm01.stdout: 
2026-03-09T17:03:44.642 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":1,"fsid":"adad5454-1bd9-11f1-a78e-99ee5fbec3ab","modified":"2026-03-09T17:02:58.064101Z","created":"2026-03-09T17:02:58.064101Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T17:03:44.642 INFO:teuthology.orchestra.run.vm01.stderr:dumped monmap epoch 1 2026-03-09T17:03:44.707 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-09T17:03:44.707 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph config generate-minimal-conf 2026-03-09T17:03:44.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:44 vm01 bash[20698]: audit 2026-03-09T17:03:44.642164+0000 mon.a (mon.0) 127 : audit [DBG] from='client.? 192.168.123.101:0/1937223195' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:03:44.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:44 vm01 bash[20698]: audit 2026-03-09T17:03:44.642164+0000 mon.a (mon.0) 127 : audit [DBG] from='client.? 
192.168.123.101:0/1937223195' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T17:03:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:45 vm01 bash[20698]: cluster 2026-03-09T17:03:44.602452+0000 mgr.a (mgr.14150) 21 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:45 vm01 bash[20698]: cluster 2026-03-09T17:03:44.602452+0000 mgr.a (mgr.14150) 21 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:48.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:47 vm01 bash[20698]: cluster 2026-03-09T17:03:46.602687+0000 mgr.a (mgr.14150) 22 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:48.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:47 vm01 bash[20698]: cluster 2026-03-09T17:03:46.602687+0000 mgr.a (mgr.14150) 22 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:48.341 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:03:48.589 INFO:teuthology.orchestra.run.vm01.stdout:# minimal ceph.conf for adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:03:48.590 INFO:teuthology.orchestra.run.vm01.stdout:[global] 2026-03-09T17:03:48.590 INFO:teuthology.orchestra.run.vm01.stdout: fsid = adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:03:48.590 INFO:teuthology.orchestra.run.vm01.stdout: mon_host = [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] 2026-03-09T17:03:48.641 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-09T17:03:48.641 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:03:48.641 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T17:03:48.649 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:03:48.649 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T17:03:48.700 INFO:tasks.cephadm:Adding mgr.a on vm01 2026-03-09T17:03:48.701 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph orch apply mgr '1;vm01=a' 2026-03-09T17:03:48.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:48 vm01 bash[20698]: audit 2026-03-09T17:03:48.589472+0000 mon.a (mon.0) 128 : audit [DBG] from='client.? 192.168.123.101:0/2250897088' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:48.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:48 vm01 bash[20698]: audit 2026-03-09T17:03:48.589472+0000 mon.a (mon.0) 128 : audit [DBG] from='client.? 
192.168.123.101:0/2250897088' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:49 vm01 bash[20698]: cluster 2026-03-09T17:03:48.602895+0000 mgr.a (mgr.14150) 23 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:49 vm01 bash[20698]: cluster 2026-03-09T17:03:48.602895+0000 mgr.a (mgr.14150) 23 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:52.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:51 vm01 bash[20698]: cluster 2026-03-09T17:03:50.603064+0000 mgr.a (mgr.14150) 24 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:52.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:51 vm01 bash[20698]: cluster 2026-03-09T17:03:50.603064+0000 mgr.a (mgr.14150) 24 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:53.353 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:03:53.800 INFO:teuthology.orchestra.run.vm01.stdout:Scheduled mgr update... 2026-03-09T17:03:53.880 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T17:03:53.880 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:03:53.880 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T17:03:53.883 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T17:03:53.883 DEBUG:teuthology.orchestra.run.vm01:> ls /dev/[sv]d? 
2026-03-09T17:03:53.927 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vda 2026-03-09T17:03:53.927 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdb 2026-03-09T17:03:53.927 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdc 2026-03-09T17:03:53.927 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdd 2026-03-09T17:03:53.927 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vde 2026-03-09T17:03:53.927 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T17:03:53.928 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T17:03:53.928 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdb 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdb 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 16:58:32.814060876 +0000 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 16:58:31.746060876 +0000 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 16:58:31.746060876 +0000 2026-03-09T17:03:53.971 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T17:03:53.971 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T17:03:54.017 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:53 vm01 bash[20698]: cluster 2026-03-09T17:03:52.603219+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:54.017 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:53 vm01 bash[20698]: cluster 2026-03-09T17:03:52.603219+0000 mgr.a (mgr.14150) 25 : cluster 
[DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T17:03:54.018 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T17:03:54.019 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T17:03:54.019 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000164488 s, 3.1 MB/s 2026-03-09T17:03:54.019 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T17:03:54.065 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdc 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdc 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 16:58:32.822060876 +0000 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 16:58:31.750060876 +0000 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 16:58:31.750060876 +0000 2026-03-09T17:03:54.111 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T17:03:54.111 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T17:03:54.163 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T17:03:54.163 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T17:03:54.163 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000221735 s, 2.3 MB/s 2026-03-09T17:03:54.163 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T17:03:54.208 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdd 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdd 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 16:58:32.814060876 +0000 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 16:58:31.746060876 +0000 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 16:58:31.746060876 +0000 2026-03-09T17:03:54.256 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T17:03:54.256 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T17:03:54.303 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T17:03:54.310 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T17:03:54.310 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000190707 s, 2.7 MB/s 2026-03-09T17:03:54.311 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T17:03:54.357 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vde 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vde 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-09 16:58:32.822060876 +0000 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-09 16:58:31.734060876 +0000 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-09 16:58:31.734060876 +0000 2026-03-09T17:03:54.407 INFO:teuthology.orchestra.run.vm01.stdout: Birth: - 2026-03-09T17:03:54.408 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T17:03:54.455 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-09T17:03:54.455 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-09T17:03:54.455 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000185818 s, 2.8 MB/s 2026-03-09T17:03:54.456 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T17:03:54.501 INFO:tasks.cephadm:Deploying osd.0 on vm01 with /dev/vde... 
2026-03-09T17:03:54.501 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- lvm zap /dev/vde 2026-03-09T17:03:55.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.795526+0000 mgr.a (mgr.14150) 26 : audit [DBG] from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm01=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:55.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.795526+0000 mgr.a (mgr.14150) 26 : audit [DBG] from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm01=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:03:55.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: cephadm 2026-03-09T17:03:53.796357+0000 mgr.a (mgr.14150) 27 : cephadm [INF] Saving service mgr spec with placement vm01=a;count:1 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: cephadm 2026-03-09T17:03:53.796357+0000 mgr.a (mgr.14150) 27 : cephadm [INF] Saving service mgr spec with placement vm01=a;count:1 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.798869+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.798869+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 
2026-03-09T17:03:53.799452+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.799452+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.800234+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.800234+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.800633+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.800633+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.803269+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.803269+0000 mon.a 
(mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.805609+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.805609+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: cephadm 2026-03-09T17:03:53.810775+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: cephadm 2026-03-09T17:03:53.810775+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.811055+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.811055+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.811684+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 
2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.811684+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.812476+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:53.812476+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: cephadm 2026-03-09T17:03:53.812988+0000 mgr.a (mgr.14150) 29 : cephadm [INF] Reconfiguring daemon mgr.a on vm01 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: cephadm 2026-03-09T17:03:53.812988+0000 mgr.a (mgr.14150) 29 : cephadm [INF] Reconfiguring daemon mgr.a on vm01 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:54.408350+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:54.408350+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:54.411587+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:03:55.157 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:54 vm01 bash[20698]: audit 2026-03-09T17:03:54.411587+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:03:56.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:55 vm01 bash[20698]: cluster 2026-03-09T17:03:54.603391+0000 mgr.a (mgr.14150) 30 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:03:58.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:57 vm01 bash[20698]: cluster 2026-03-09T17:03:56.603578+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:03:59.164 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config
2026-03-09T17:04:00.054 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:04:00.072 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph orch daemon add osd vm01:/dev/vde
2026-03-09T17:04:00.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:03:59 vm01 bash[20698]: cluster 2026-03-09T17:03:58.603773+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:01 vm01 bash[20698]: cluster 2026-03-09T17:04:00.603971+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:04.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:04 vm01 bash[20698]: cluster 2026-03-09T17:04:02.604173+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:04.692 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config
2026-03-09T17:04:06.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:06 vm01 bash[20698]: cluster 2026-03-09T17:04:04.604356+0000 mgr.a (mgr.14150) 35 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:06.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:06 vm01 bash[20698]: audit 2026-03-09T17:04:05.086360+0000 mgr.a (mgr.14150) 36 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T17:04:06.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:06 vm01 bash[20698]: audit 2026-03-09T17:04:05.087730+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T17:04:06.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:06 vm01 bash[20698]: audit 2026-03-09T17:04:05.088951+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T17:04:06.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:06 vm01 bash[20698]: audit 2026-03-09T17:04:05.089329+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T17:04:08.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:08 vm01 bash[20698]: cluster 2026-03-09T17:04:06.604530+0000 mgr.a (mgr.14150) 37 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:10 vm01 bash[20698]: cluster 2026-03-09T17:04:08.604686+0000 mgr.a (mgr.14150) 38 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:11.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:11 vm01 bash[20698]: audit 2026-03-09T17:04:10.625762+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.101:0/56162004' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "14cb9f16-345f-4192-8f2a-c7b83d8d25dc"}]: dispatch
2026-03-09T17:04:11.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:11 vm01 bash[20698]: audit 2026-03-09T17:04:10.627529+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 192.168.123.101:0/56162004' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "14cb9f16-345f-4192-8f2a-c7b83d8d25dc"}]': finished
2026-03-09T17:04:11.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:11 vm01 bash[20698]: cluster 2026-03-09T17:04:10.629613+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-09T17:04:11.361 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:11 vm01 bash[20698]: audit 2026-03-09T17:04:10.629709+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T17:04:12.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:12 vm01 bash[20698]: cluster 2026-03-09T17:04:10.604836+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:12.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:12 vm01 bash[20698]: audit 2026-03-09T17:04:11.233190+0000 mon.a (mon.0) 147 : audit [DBG] from='client.? 192.168.123.101:0/3747915581' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T17:04:14.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:14 vm01 bash[20698]: cluster 2026-03-09T17:04:12.605039+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:15.656 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:15 vm01 bash[20698]: cluster 2026-03-09T17:04:14.605242+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:18.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:17 vm01 bash[20698]: cluster 2026-03-09T17:04:16.605506+0000 mgr.a (mgr.14150) 42 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:20.092 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:19 vm01 bash[20698]: cluster 2026-03-09T17:04:18.605794+0000 mgr.a (mgr.14150) 43 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:20.957 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:04:20 vm01 systemd[1]: /etc/systemd/system/ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T17:04:20.957 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:20 vm01 bash[20698]: audit 2026-03-09T17:04:20.132560+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T17:04:20.957 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:20 vm01 bash[20698]: audit 2026-03-09T17:04:20.133136+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T17:04:20.957 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:20 vm01 bash[20698]: cephadm 2026-03-09T17:04:20.133579+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon osd.0 on vm01
2026-03-09T17:04:22.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:21 vm01 bash[20698]: cluster 2026-03-09T17:04:20.605982+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:22.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:21 vm01 bash[20698]: audit 2026-03-09T17:04:21.158415+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T17:04:22.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:21 vm01 bash[20698]: audit 2026-03-09T17:04:21.161157+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:22.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:21 vm01 bash[20698]: audit 2026-03-09T17:04:21.165316+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:23.987 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:23 vm01 bash[20698]: cluster 2026-03-09T17:04:22.606230+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:26.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:25 vm01 bash[20698]: cluster 2026-03-09T17:04:24.606480+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:26.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:25 vm01 bash[20698]: audit 2026-03-09T17:04:25.192506+0000 mon.a (mon.0) 153 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/3222912846,v1:192.168.123.101:6803/3222912846]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-09T17:04:27.085 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:26 vm01 bash[20698]: audit 2026-03-09T17:04:25.676604+0000 mon.a (mon.0) 154 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/3222912846,v1:192.168.123.101:6803/3222912846]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-09T17:04:27.085 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:26 vm01 bash[20698]: cluster 2026-03-09T17:04:25.678319+0000 mon.a (mon.0) 155 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-09T17:04:27.085 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:26 vm01 bash[20698]: audit 2026-03-09T17:04:25.678680+0000 mon.a (mon.0) 156 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/3222912846,v1:192.168.123.101:6803/3222912846]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch
2026-03-09T17:04:27.085 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:26 vm01 bash[20698]: audit 2026-03-09T17:04:25.678802+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T17:04:28.045 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: cluster 2026-03-09T17:04:26.606667+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:26.678219+0000 mon.a (mon.0) 158 : audit [INF] from='osd.0 [v2:192.168.123.101:6802/3222912846,v1:192.168.123.101:6803/3222912846]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: cluster 2026-03-09T17:04:26.680112+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:26.680906+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:26.685352+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:27.260150+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:27.263261+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:27.649314+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:27.650027+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T17:04:28.046 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:27 vm01 bash[20698]: audit 2026-03-09T17:04:27.653478+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:28.341 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 0 on host 'vm01'
2026-03-09T17:04:28.422 DEBUG:teuthology.orchestra.run.vm01:osd.0> sudo journalctl -f -n 0 -u ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@osd.0.service
2026-03-09T17:04:28.423 INFO:tasks.cephadm:Waiting for 1 OSDs to come up...
2026-03-09T17:04:28.423 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph osd stat -f json
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: cluster 2026-03-09T17:04:26.141728+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: cluster 2026-03-09T17:04:26.141792+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: audit 2026-03-09T17:04:27.683230+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: cluster 2026-03-09T17:04:27.685854+0000 mon.a (mon.0) 168 : cluster [INF] osd.0 [v2:192.168.123.101:6802/3222912846,v1:192.168.123.101:6803/3222912846] boot
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: cluster 2026-03-09T17:04:27.685935+0000 mon.a (mon.0) 169 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: audit 2026-03-09T17:04:27.686943+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: audit 2026-03-09T17:04:28.333574+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: audit 2026-03-09T17:04:28.336481+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:28.695 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:28 vm01 bash[20698]: audit 2026-03-09T17:04:28.339368+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:30.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:29 vm01 bash[20698]: cluster 2026-03-09T17:04:28.606892+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T17:04:30.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:29 vm01 bash[20698]: cluster 2026-03-09T17:04:28.691029+0000 mon.a (mon.0) 174 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T17:04:32.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:31 vm01 bash[20698]: cluster 2026-03-09T17:04:30.607098+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T17:04:33.081 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config
2026-03-09T17:04:33.361 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:04:33.426 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":9,"num_osds":1,"num_up_osds":1,"osd_up_since":1773075867,"num_in_osds":1,"osd_in_since":1773075850,"num_remapped_pgs":0}
2026-03-09T17:04:33.426 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph osd dump --format=json
2026-03-09T17:04:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:33 vm01 bash[20698]: cluster 2026-03-09T17:04:32.607320+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T17:04:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:33 vm01 bash[20698]: audit 2026-03-09T17:04:33.360754+0000 mon.a (mon.0) 175 : audit [DBG] from='client.? 192.168.123.101:0/1121832266' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: cephadm 2026-03-09T17:04:33.925666+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Detected new or changed devices on vm01
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: audit 2026-03-09T17:04:33.929215+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: audit 2026-03-09T17:04:33.931896+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: audit 2026-03-09T17:04:33.932530+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: cephadm 2026-03-09T17:04:33.932837+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Adjusting osd_memory_target on vm01 to 455.7M
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: cephadm 2026-03-09T17:04:33.933180+0000 mgr.a (mgr.14150) 54 : cephadm [WRN] Unable to set osd_memory_target on vm01 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: audit 2026-03-09T17:04:33.933437+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: audit 2026-03-09T17:04:33.933772+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T17:04:35.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:34 vm01 bash[20698]: audit 2026-03-09T17:04:33.936101+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a'
2026-03-09T17:04:36.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:35 vm01 bash[20698]: cluster 2026-03-09T17:04:34.607558+0000 mgr.a (mgr.14150) 55 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T17:04:37.096 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config
2026-03-09T17:04:37.339 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-09T17:04:37.339
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":9,"fsid":"adad5454-1bd9-11f1-a78e-99ee5fbec3ab","created":"2026-03-09T17:02:59.349432+0000","modified":"2026-03-09T17:04:28.687756+0000","last_up_change":"2026-03-09T17:04:27.681638+0000","last_in_change":"2026-03-09T17:04:10.626065+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":1,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"14cb9f16-345f-4192-8f2a-c7b83d8d25dc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6803","nonce":3222912846}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6805","nonce":3222912846}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6809","nonce":3222912846}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6807","nonce":3222912846}]},"public_addr":"192.168.123.101:6803/3222912846","cluster_addr":"192.168.123.101:6805/3222912846","heartbeat_back_addr":"192.168.123.101:6809/3222912846","heartbeat_front_addr":"192.168.123.101:6807/3222912846","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":
"2026-03-09T17:04:26.141793+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:6801/2358450597":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/1676793419":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/656202019":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/2513887716":"2026-03-10T17:03:09.906968+0000","192.168.123.101:0/2597834386":"2026-03-10T17:03:09.906968+0000","192.168.123.101:0/175366695":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/2754850381":"2026-03-10T17:03:09.906968+0000","192.168.123.101:6800/2358450597":"2026-03-10T17:03:20.590428+0000","192.168.123.101:6801/370494539":"2026-03-10T17:03:09.906968+0000","192.168.123.101:6800/370494539":"2026-03-10T17:03:09.906968+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T17:04:37.392 INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-09T17:04:37.393 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-09T17:04:37.393 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T17:04:38.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:37 vm01 bash[20698]: cluster 2026-03-09T17:04:36.607761+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:38.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:37 vm01 bash[20698]: cluster 2026-03-09T17:04:36.607761+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:38.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:37 vm01 bash[20698]: audit 2026-03-09T17:04:37.339024+0000 mon.a (mon.0) 182 : audit [DBG] from='client.? 192.168.123.101:0/3199733615' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:04:38.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:37 vm01 bash[20698]: audit 2026-03-09T17:04:37.339024+0000 mon.a (mon.0) 182 : audit [DBG] from='client.? 
192.168.123.101:0/3199733615' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:04:40.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:39 vm01 bash[20698]: cluster 2026-03-09T17:04:38.608000+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:40.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:39 vm01 bash[20698]: cluster 2026-03-09T17:04:38.608000+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:41.107 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:04:41.383 INFO:teuthology.orchestra.run.vm01.stdout:[client.0] 2026-03-09T17:04:41.383 INFO:teuthology.orchestra.run.vm01.stdout: key = AQCp/a5prFmrFhAAbuStLaLkdNUy/8m9cEMXTA== 2026-03-09T17:04:41.441 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:04:41.441 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T17:04:41.441 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T17:04:41.452 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-09T17:04:41.452 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T17:04:41.452 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph mgr dump --format=json 2026-03-09T17:04:42.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:41 vm01 bash[20698]: cluster 2026-03-09T17:04:40.608211+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:42.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:41 vm01 bash[20698]: cluster 2026-03-09T17:04:40.608211+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:42.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:41 vm01 bash[20698]: audit 2026-03-09T17:04:41.380198+0000 mon.a (mon.0) 183 : audit [INF] from='client.? 192.168.123.101:0/598143879' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:04:42.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:41 vm01 bash[20698]: audit 2026-03-09T17:04:41.380198+0000 mon.a (mon.0) 183 : audit [INF] from='client.? 192.168.123.101:0/598143879' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T17:04:42.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:41 vm01 bash[20698]: audit 2026-03-09T17:04:41.381588+0000 mon.a (mon.0) 184 : audit [INF] from='client.? 
192.168.123.101:0/598143879' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:04:42.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:41 vm01 bash[20698]: audit 2026-03-09T17:04:41.381588+0000 mon.a (mon.0) 184 : audit [INF] from='client.? 192.168.123.101:0/598143879' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T17:04:44.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:43 vm01 bash[20698]: cluster 2026-03-09T17:04:42.608446+0000 mgr.a (mgr.14150) 59 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:44.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:43 vm01 bash[20698]: cluster 2026-03-09T17:04:42.608446+0000 mgr.a (mgr.14150) 59 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:45.116 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:04:45.388 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:04:45.435 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":12,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":2299906255},{"type":"v1","addr":"192.168.123.101:6801","nonce":2299906255}]},"active_addr":"192.168.123.101:6801/2299906255","active_change":"2026-03-09T17:03:20.590697+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.101:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1794445077}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.1
68.123.101:0","nonce":4090824976}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3218459890}]}]} 2026-03-09T17:04:45.436 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T17:04:45.436 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T17:04:45.436 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph osd dump --format=json 2026-03-09T17:04:46.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:45 vm01 bash[20698]: cluster 2026-03-09T17:04:44.608701+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:46.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:45 vm01 bash[20698]: cluster 2026-03-09T17:04:44.608701+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:46.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:45 vm01 bash[20698]: audit 2026-03-09T17:04:45.387113+0000 mon.a (mon.0) 185 : audit [DBG] from='client.? 192.168.123.101:0/919042967' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:04:46.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:45 vm01 bash[20698]: audit 2026-03-09T17:04:45.387113+0000 mon.a (mon.0) 185 : audit [DBG] from='client.? 
192.168.123.101:0/919042967' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T17:04:48.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:47 vm01 bash[20698]: cluster 2026-03-09T17:04:46.608895+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:48.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:47 vm01 bash[20698]: cluster 2026-03-09T17:04:46.608895+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:49.127 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:04:49.361 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:04:49.361 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":9,"fsid":"adad5454-1bd9-11f1-a78e-99ee5fbec3ab","created":"2026-03-09T17:02:59.349432+0000","modified":"2026-03-09T17:04:28.687756+0000","last_up_change":"2026-03-09T17:04:27.681638+0000","last_in_change":"2026-03-09T17:04:10.626065+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":1,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"14cb9f16-345f-4192-8f2a-c7b83d8d25dc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6803","nonce":3222912846}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.1
68.123.101:6804","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6805","nonce":3222912846}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6809","nonce":3222912846}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6807","nonce":3222912846}]},"public_addr":"192.168.123.101:6803/3222912846","cluster_addr":"192.168.123.101:6805/3222912846","heartbeat_back_addr":"192.168.123.101:6809/3222912846","heartbeat_front_addr":"192.168.123.101:6807/3222912846","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:04:26.141793+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:6801/2358450597":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/1676793419":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/656202019":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/2513887716":"2026-03-10T17:03:09.906968+0000","192.168.123.101:0/2597834386":"2026-03-10T17:03:09.906968+0000","192.168.123.101:0/175366695":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/2754850381":"2026-03-10T17:03:09.906968+0000","192.168.123.101:6800/2358450597":"2026-03-10T17:03:20.590428+0000","192.168.123.101:6801/370494539":"2026-03-10T17:03:09.906968+0000","192.168.123.101:6800/370494539":"2026-03-10T17:03:09.906968+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"
recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T17:04:49.416 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T17:04:49.416 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph osd dump --format=json 2026-03-09T17:04:50.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:49 vm01 bash[20698]: cluster 2026-03-09T17:04:48.609181+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:50.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:49 vm01 bash[20698]: cluster 2026-03-09T17:04:48.609181+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:50.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:49 vm01 bash[20698]: audit 2026-03-09T17:04:49.360588+0000 mon.a (mon.0) 186 : audit [DBG] from='client.? 192.168.123.101:0/680169307' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:04:50.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:49 vm01 bash[20698]: audit 2026-03-09T17:04:49.360588+0000 mon.a (mon.0) 186 : audit [DBG] from='client.? 
192.168.123.101:0/680169307' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:04:52.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:51 vm01 bash[20698]: cluster 2026-03-09T17:04:50.609376+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:52.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:51 vm01 bash[20698]: cluster 2026-03-09T17:04:50.609376+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:53.139 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:04:53.387 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:04:53.387 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":9,"fsid":"adad5454-1bd9-11f1-a78e-99ee5fbec3ab","created":"2026-03-09T17:02:59.349432+0000","modified":"2026-03-09T17:04:28.687756+0000","last_up_change":"2026-03-09T17:04:27.681638+0000","last_in_change":"2026-03-09T17:04:10.626065+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":1,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"14cb9f16-345f-4192-8f2a-c7b83d8d25dc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6803","nonce":3222912846}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.1
68.123.101:6804","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6805","nonce":3222912846}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6809","nonce":3222912846}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3222912846},{"type":"v1","addr":"192.168.123.101:6807","nonce":3222912846}]},"public_addr":"192.168.123.101:6803/3222912846","cluster_addr":"192.168.123.101:6805/3222912846","heartbeat_back_addr":"192.168.123.101:6809/3222912846","heartbeat_front_addr":"192.168.123.101:6807/3222912846","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T17:04:26.141793+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:6801/2358450597":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/1676793419":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/656202019":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/2513887716":"2026-03-10T17:03:09.906968+0000","192.168.123.101:0/2597834386":"2026-03-10T17:03:09.906968+0000","192.168.123.101:0/175366695":"2026-03-10T17:03:20.590428+0000","192.168.123.101:0/2754850381":"2026-03-10T17:03:09.906968+0000","192.168.123.101:6800/2358450597":"2026-03-10T17:03:20.590428+0000","192.168.123.101:6801/370494539":"2026-03-10T17:03:09.906968+0000","192.168.123.101:6800/370494539":"2026-03-10T17:03:09.906968+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"
recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T17:04:53.436 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph tell osd.0 flush_pg_stats 2026-03-09T17:04:54.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:53 vm01 bash[20698]: cluster 2026-03-09T17:04:52.609627+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:54.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:53 vm01 bash[20698]: cluster 2026-03-09T17:04:52.609627+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:54.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:53 vm01 bash[20698]: audit 2026-03-09T17:04:53.386846+0000 mon.a (mon.0) 187 : audit [DBG] from='client.? 192.168.123.101:0/4253228261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:04:54.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:53 vm01 bash[20698]: audit 2026-03-09T17:04:53.386846+0000 mon.a (mon.0) 187 : audit [DBG] from='client.? 
192.168.123.101:0/4253228261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T17:04:56.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:55 vm01 bash[20698]: cluster 2026-03-09T17:04:54.609902+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:56.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:55 vm01 bash[20698]: cluster 2026-03-09T17:04:54.609902+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:57.153 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:04:57.360 INFO:teuthology.orchestra.run.vm01.stdout:34359738375 2026-03-09T17:04:57.360 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph osd last-stat-seq osd.0 2026-03-09T17:04:58.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:57 vm01 bash[20698]: cluster 2026-03-09T17:04:56.610117+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:04:58.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:57 vm01 bash[20698]: cluster 2026-03-09T17:04:56.610117+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:00.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:59 vm01 bash[20698]: cluster 2026-03-09T17:04:58.610390+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:00.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:04:59 vm01 bash[20698]: cluster 2026-03-09T17:04:58.610390+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 
20 GiB / 20 GiB avail 2026-03-09T17:05:01.163 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:05:01.400 INFO:teuthology.orchestra.run.vm01.stdout:34359738376 2026-03-09T17:05:01.456 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738375 got 34359738376 for osd.0 2026-03-09T17:05:01.456 DEBUG:teuthology.parallel:result is None 2026-03-09T17:05:01.456 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T17:05:01.456 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph pg dump --format=json 2026-03-09T17:05:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:01 vm01 bash[20698]: cluster 2026-03-09T17:05:00.610625+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:01 vm01 bash[20698]: cluster 2026-03-09T17:05:00.610625+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:01 vm01 bash[20698]: audit 2026-03-09T17:05:01.400133+0000 mon.a (mon.0) 188 : audit [DBG] from='client.? 192.168.123.101:0/258338507' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:05:02.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:01 vm01 bash[20698]: audit 2026-03-09T17:05:01.400133+0000 mon.a (mon.0) 188 : audit [DBG] from='client.? 
192.168.123.101:0/258338507' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T17:05:04.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:03 vm01 bash[20698]: cluster 2026-03-09T17:05:02.610904+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:04.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:03 vm01 bash[20698]: cluster 2026-03-09T17:05:02.610904+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:05.175 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:05:05.417 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:05:05.417 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-09T17:05:05.462 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":51,"stamp":"2026-03-09T17:05:04.611085+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"availab
le":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26920,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940504,"statfs":{"total":21470642176,"available":21443076096,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compr
essed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":0,"up_from":8,"seq":34359738377,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26920,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940504,"statfs":{"total":21470642176,"available":21443076096,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-09T17:05:05.462 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph pg dump --format=json 2026-03-09T17:05:06.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:05 vm01 bash[20698]: cluster 2026-03-09T17:05:04.611169+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:06.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:05 vm01 bash[20698]: cluster 2026-03-09T17:05:04.611169+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:07.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:06 vm01 bash[20698]: audit 2026-03-09T17:05:05.417408+0000 mgr.a (mgr.14150) 71 : audit [DBG] from='client.14209 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": 
"json"}]: dispatch 2026-03-09T17:05:07.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:06 vm01 bash[20698]: audit 2026-03-09T17:05:05.417408+0000 mgr.a (mgr.14150) 71 : audit [DBG] from='client.14209 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:05:08.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:07 vm01 bash[20698]: cluster 2026-03-09T17:05:06.611398+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:08.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:07 vm01 bash[20698]: cluster 2026-03-09T17:05:06.611398+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:09.185 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:05:09.424 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:05:09.424 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-09T17:05:09.473 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":53,"stamp":"2026-03-09T17:05:08.611575+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26920,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940504,"statfs":{"total":21470642176,"available":21443076096,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_laten
cy_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":0,"up_from":8,"seq":34359738377,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26920,"kb_used_data":80,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940504,"statfs":{"total":21470642176,"available":21443076096,"internally_reserved":0,"allocated":81920,"data_stored":16970,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latenc
y_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-09T17:05:09.473 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T17:05:09.473 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-09T17:05:09.473 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T17:05:09.473 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph health --format=json 2026-03-09T17:05:10.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:09 vm01 bash[20698]: cluster 2026-03-09T17:05:08.611654+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:10.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:09 vm01 bash[20698]: cluster 2026-03-09T17:05:08.611654+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:11.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:10 vm01 bash[20698]: audit 2026-03-09T17:05:09.424115+0000 mgr.a (mgr.14150) 74 : audit [DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:05:11.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:10 vm01 bash[20698]: audit 2026-03-09T17:05:09.424115+0000 mgr.a (mgr.14150) 74 : audit [DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T17:05:12.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:11 vm01 bash[20698]: cluster 2026-03-09T17:05:10.611927+0000 mgr.a (mgr.14150) 75 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:12.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:11 vm01 bash[20698]: cluster 
2026-03-09T17:05:10.611927+0000 mgr.a (mgr.14150) 75 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:13.197 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:05:13.456 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-09T17:05:13.456 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T17:05:13.502 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T17:05:13.502 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T17:05:13.502 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T17:05:13.506 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T17:05:13.506 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-09T17:05:13.506 DEBUG:teuthology.orchestra.run.vm01:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T17:05:13.509 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T17:05:13.509 INFO:teuthology.orchestra.run.vm01.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T17:05:13.510 DEBUG:teuthology.orchestra.run.vm01:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T17:05:13.555 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T17:05:13.555 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T17:05:13.600 INFO:tasks.workunit:timeout=3h 2026-03-09T17:05:13.600 INFO:tasks.workunit:cleanup=True 2026-03-09T17:05:13.600 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T17:05:13.643 INFO:tasks.workunit.client.0.vm01.stderr:Cloning into 
'/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T17:05:14.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:13 vm01 bash[20698]: cluster 2026-03-09T17:05:12.612188+0000 mgr.a (mgr.14150) 76 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:14.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:13 vm01 bash[20698]: cluster 2026-03-09T17:05:12.612188+0000 mgr.a (mgr.14150) 76 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:14.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:13 vm01 bash[20698]: audit 2026-03-09T17:05:13.456386+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 192.168.123.101:0/1327015298' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:05:14.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:13 vm01 bash[20698]: audit 2026-03-09T17:05:13.456386+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 192.168.123.101:0/1327015298' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T17:05:16.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:15 vm01 bash[20698]: cluster 2026-03-09T17:05:14.612456+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:16.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:15 vm01 bash[20698]: cluster 2026-03-09T17:05:14.612456+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:18.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:18 vm01 bash[20698]: cluster 2026-03-09T17:05:16.612647+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:18.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:18 vm01 bash[20698]: cluster 2026-03-09T17:05:16.612647+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v57: 
0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:20.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:20 vm01 bash[20698]: cluster 2026-03-09T17:05:18.612884+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:20.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:20 vm01 bash[20698]: cluster 2026-03-09T17:05:18.612884+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:22.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:22 vm01 bash[20698]: cluster 2026-03-09T17:05:20.613140+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:22.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:22 vm01 bash[20698]: cluster 2026-03-09T17:05:20.613140+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:24.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:24 vm01 bash[20698]: cluster 2026-03-09T17:05:22.613418+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:24.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:24 vm01 bash[20698]: cluster 2026-03-09T17:05:22.613418+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:25.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:25 vm01 bash[20698]: cluster 2026-03-09T17:05:24.613648+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:25.906 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:25 vm01 bash[20698]: cluster 2026-03-09T17:05:24.613648+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:28.156 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:27 vm01 bash[20698]: cluster 2026-03-09T17:05:26.613831+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:28.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:27 vm01 bash[20698]: cluster 2026-03-09T17:05:26.613831+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:30.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:29 vm01 bash[20698]: cluster 2026-03-09T17:05:28.614042+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:30.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:29 vm01 bash[20698]: cluster 2026-03-09T17:05:28.614042+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:32.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:31 vm01 bash[20698]: cluster 2026-03-09T17:05:30.614275+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:32.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:31 vm01 bash[20698]: cluster 2026-03-09T17:05:30.614275+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:33 vm01 bash[20698]: cluster 2026-03-09T17:05:32.614487+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:33 vm01 bash[20698]: cluster 2026-03-09T17:05:32.614487+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 
2026-03-09T17:05:33.942442+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 2026-03-09T17:05:33.942442+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 2026-03-09T17:05:34.395669+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 2026-03-09T17:05:34.395669+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 2026-03-09T17:05:34.396182+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 2026-03-09T17:05:34.396182+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 2026-03-09T17:05:34.399388+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:05:35.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:34 vm01 bash[20698]: audit 2026-03-09T17:05:34.399388+0000 mon.a 
(mon.0) 193 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:05:36.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:35 vm01 bash[20698]: cluster 2026-03-09T17:05:34.614699+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:36.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:35 vm01 bash[20698]: cluster 2026-03-09T17:05:34.614699+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:38.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:37 vm01 bash[20698]: cluster 2026-03-09T17:05:36.614905+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:38.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:37 vm01 bash[20698]: cluster 2026-03-09T17:05:36.614905+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:40.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:39 vm01 bash[20698]: cluster 2026-03-09T17:05:38.615134+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:40.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:39 vm01 bash[20698]: cluster 2026-03-09T17:05:38.615134+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:42.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:41 vm01 bash[20698]: cluster 2026-03-09T17:05:40.615347+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:42.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:41 vm01 bash[20698]: cluster 2026-03-09T17:05:40.615347+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-09T17:05:44.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:43 vm01 bash[20698]: cluster 2026-03-09T17:05:42.615576+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:44.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:43 vm01 bash[20698]: cluster 2026-03-09T17:05:42.615576+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:45 vm01 bash[20698]: cluster 2026-03-09T17:05:44.615806+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:45 vm01 bash[20698]: cluster 2026-03-09T17:05:44.615806+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:48.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:47 vm01 bash[20698]: cluster 2026-03-09T17:05:46.615996+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:48.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:47 vm01 bash[20698]: cluster 2026-03-09T17:05:46.615996+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:49 vm01 bash[20698]: cluster 2026-03-09T17:05:48.616210+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:49 vm01 bash[20698]: cluster 2026-03-09T17:05:48.616210+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:52.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:52 vm01 
bash[20698]: cluster 2026-03-09T17:05:50.616394+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:52.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:52 vm01 bash[20698]: cluster 2026-03-09T17:05:50.616394+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:53.656 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:53 vm01 bash[20698]: cluster 2026-03-09T17:05:52.616673+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:53.656 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:53 vm01 bash[20698]: cluster 2026-03-09T17:05:52.616673+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: git switch -c 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:Or undo this operation with: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: git switch - 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-09T17:05:53.845 INFO:tasks.workunit.client.0.vm01.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T17:05:53.853 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T17:05:53.903 INFO:tasks.workunit.client.0.vm01.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T17:05:53.904 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T17:05:53.904 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T17:05:53.943 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T17:05:53.972 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T17:05:53.995 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T17:05:53.996 
INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T17:05:53.996 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T17:05:54.018 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T17:05:54.021 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-09T17:05:54.021 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T17:05:54.069 INFO:tasks.workunit:Running workunits matching cephadm/test_cephadm_timeout.py on client.0... 2026-03-09T17:05:54.069 INFO:tasks.workunit:Running workunit cephadm/test_cephadm_timeout.py... 2026-03-09T17:05:54.069 DEBUG:teuthology.orchestra.run.vm01:workunit test cephadm/test_cephadm_timeout.py> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm_timeout.py 2026-03-09T17:05:56.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:55 vm01 bash[20698]: cluster 2026-03-09T17:05:54.616946+0000 mgr.a (mgr.14150) 97 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:56.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:55 vm01 bash[20698]: cluster 2026-03-09T17:05:54.616946+0000 mgr.a (mgr.14150) 97 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:58.156 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:57 vm01 bash[20698]: cluster 2026-03-09T17:05:56.617202+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:58.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:57 vm01 bash[20698]: cluster 2026-03-09T17:05:56.617202+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:05:58.771 INFO:tasks.workunit.client.0.vm01.stderr:Inferring fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:06:00.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:59 vm01 bash[20698]: cluster 2026-03-09T17:05:58.617392+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:00.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:05:59 vm01 bash[20698]: cluster 2026-03-09T17:05:58.617392+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:01 vm01 bash[20698]: cluster 2026-03-09T17:06:00.617578+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:01 vm01 bash[20698]: cluster 2026-03-09T17:06:00.617578+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:02.782 INFO:tasks.workunit.client.0.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:06:04.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:03 vm01 bash[20698]: cluster 2026-03-09T17:06:02.617822+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:04.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:03 vm01 
bash[20698]: cluster 2026-03-09T17:06:02.617822+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:06.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:05 vm01 bash[20698]: cluster 2026-03-09T17:06:04.618062+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:06.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:05 vm01 bash[20698]: cluster 2026-03-09T17:06:04.618062+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:06.804 INFO:tasks.workunit.client.0.vm01.stderr:Using ceph image with id '654f31e6858e' and tag 'e911bdebe5c8faa3800735d1568fcdca65db60df' created on 2026-02-25 18:57:17 +0000 UTC 2026-03-09T17:06:06.804 INFO:tasks.workunit.client.0.vm01.stderr:quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: cluster 2026-03-09T17:06:06.618294+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: cluster 2026-03-09T17:06:06.618294+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.336075+0000 mon.a (mon.0) 194 : audit [INF] from='client.? 192.168.123.101:0/1204195466' entity='client.admin' 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.336075+0000 mon.a (mon.0) 194 : audit [INF] from='client.? 
192.168.123.101:0/1204195466' entity='client.admin' 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.341596+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.341596+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.342635+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.342635+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.343104+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.343104+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.347150+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' 
entity='mgr.a' 2026-03-09T17:06:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:07 vm01 bash[20698]: audit 2026-03-09T17:06:07.347150+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:06:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:09 vm01 bash[20698]: cluster 2026-03-09T17:06:08.618593+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:10.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:09 vm01 bash[20698]: cluster 2026-03-09T17:06:08.618593+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:12.020 INFO:tasks.workunit.client.0.vm01.stderr:Inferring fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:06:12.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:11 vm01 bash[20698]: cluster 2026-03-09T17:06:10.618881+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:12.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:11 vm01 bash[20698]: cluster 2026-03-09T17:06:10.618881+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:14.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:13 vm01 bash[20698]: cluster 2026-03-09T17:06:12.619138+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:14.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:13 vm01 bash[20698]: cluster 2026-03-09T17:06:12.619138+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:16.031 INFO:tasks.workunit.client.0.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:06:16.156 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:15 vm01 bash[20698]: cluster 2026-03-09T17:06:14.619434+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:16.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:15 vm01 bash[20698]: cluster 2026-03-09T17:06:14.619434+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:18.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:17 vm01 bash[20698]: cluster 2026-03-09T17:06:16.619708+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:18.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:17 vm01 bash[20698]: cluster 2026-03-09T17:06:16.619708+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:20.054 INFO:tasks.workunit.client.0.vm01.stderr:Using ceph image with id '654f31e6858e' and tag 'e911bdebe5c8faa3800735d1568fcdca65db60df' created on 2026-02-25 18:57:17 +0000 UTC 2026-03-09T17:06:20.055 INFO:tasks.workunit.client.0.vm01.stderr:quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T17:06:20.070 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:19 vm01 bash[20698]: cluster 2026-03-09T17:06:18.619963+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:20.070 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:19 vm01 bash[20698]: cluster 2026-03-09T17:06:18.619963+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:20.312 INFO:tasks.workunit.client.0.vm01.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-09T17:06:20.312 INFO:tasks.workunit.client.0.vm01.stdout:vm01 /dev/sr0 hdd 
QEMU_DVD-ROM_QM00003 366k No 106s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-09T17:06:20.312 INFO:tasks.workunit.client.0.vm01.stdout:vm01 /dev/vdb hdd DWNBRSTVMM01001 20.0G Yes 106s ago 2026-03-09T17:06:20.312 INFO:tasks.workunit.client.0.vm01.stdout:vm01 /dev/vdc hdd DWNBRSTVMM01002 20.0G Yes 106s ago 2026-03-09T17:06:20.312 INFO:tasks.workunit.client.0.vm01.stdout:vm01 /dev/vdd hdd DWNBRSTVMM01003 20.0G Yes 106s ago 2026-03-09T17:06:20.312 INFO:tasks.workunit.client.0.vm01.stdout:vm01 /dev/vde hdd DWNBRSTVMM01004 20.0G No 106s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T17:06:21.068 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:20 vm01 bash[20698]: audit 2026-03-09T17:06:20.311423+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:06:21.068 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:20 vm01 bash[20698]: audit 2026-03-09T17:06:20.311423+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T17:06:21.068 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:20 vm01 bash[20698]: audit 2026-03-09T17:06:20.606926+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:06:21.068 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:20 vm01 bash[20698]: audit 2026-03-09T17:06:20.606926+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:06:21.068 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:20 vm01 bash[20698]: audit 2026-03-09T17:06:20.609764+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:06:21.068 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:20 vm01 bash[20698]: audit 
2026-03-09T17:06:20.609764+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:06:22.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:21 vm01 bash[20698]: audit 2026-03-09T17:06:20.310116+0000 mgr.a (mgr.14150) 110 : audit [DBG] from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:06:22.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:21 vm01 bash[20698]: audit 2026-03-09T17:06:20.310116+0000 mgr.a (mgr.14150) 110 : audit [DBG] from='client.14217 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T17:06:22.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:21 vm01 bash[20698]: cluster 2026-03-09T17:06:20.620156+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:22.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:21 vm01 bash[20698]: cluster 2026-03-09T17:06:20.620156+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:24.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:23 vm01 bash[20698]: cluster 2026-03-09T17:06:22.620405+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:24.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:23 vm01 bash[20698]: cluster 2026-03-09T17:06:22.620405+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:26.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:25 vm01 bash[20698]: cluster 2026-03-09T17:06:24.620689+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:26.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 
09 17:06:25 vm01 bash[20698]: cluster 2026-03-09T17:06:24.620689+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:28.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:27 vm01 bash[20698]: cluster 2026-03-09T17:06:26.620942+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v92: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:28.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:27 vm01 bash[20698]: cluster 2026-03-09T17:06:26.620942+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v92: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:30.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:29 vm01 bash[20698]: cluster 2026-03-09T17:06:28.621231+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v93: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:30.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:29 vm01 bash[20698]: cluster 2026-03-09T17:06:28.621231+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v93: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:32.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:31 vm01 bash[20698]: cluster 2026-03-09T17:06:30.621444+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v94: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:32.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:31 vm01 bash[20698]: cluster 2026-03-09T17:06:30.621444+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v94: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:33 vm01 bash[20698]: cluster 2026-03-09T17:06:32.621677+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v95: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:34.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:33 vm01 bash[20698]: cluster 2026-03-09T17:06:32.621677+0000 mgr.a 
(mgr.14150) 117 : cluster [DBG] pgmap v95: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:36.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:35 vm01 bash[20698]: cluster 2026-03-09T17:06:34.621971+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v96: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:36.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:35 vm01 bash[20698]: cluster 2026-03-09T17:06:34.621971+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v96: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:38.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:37 vm01 bash[20698]: cluster 2026-03-09T17:06:36.622234+0000 mgr.a (mgr.14150) 119 : cluster [DBG] pgmap v97: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:38.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:37 vm01 bash[20698]: cluster 2026-03-09T17:06:36.622234+0000 mgr.a (mgr.14150) 119 : cluster [DBG] pgmap v97: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:40.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:39 vm01 bash[20698]: cluster 2026-03-09T17:06:38.622481+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v98: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:40.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:39 vm01 bash[20698]: cluster 2026-03-09T17:06:38.622481+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v98: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:42.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:41 vm01 bash[20698]: cluster 2026-03-09T17:06:40.622694+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v99: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:42.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:41 vm01 bash[20698]: cluster 2026-03-09T17:06:40.622694+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v99: 0 pgs: ; 0 B data, 26 MiB used, 20 
GiB / 20 GiB avail 2026-03-09T17:06:44.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:43 vm01 bash[20698]: cluster 2026-03-09T17:06:42.622917+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v100: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:44.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:43 vm01 bash[20698]: cluster 2026-03-09T17:06:42.622917+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v100: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:45 vm01 bash[20698]: cluster 2026-03-09T17:06:44.623160+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v101: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:45 vm01 bash[20698]: cluster 2026-03-09T17:06:44.623160+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v101: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:48.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:47 vm01 bash[20698]: cluster 2026-03-09T17:06:46.623375+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v102: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:48.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:47 vm01 bash[20698]: cluster 2026-03-09T17:06:46.623375+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v102: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:49 vm01 bash[20698]: cluster 2026-03-09T17:06:48.623602+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v103: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:49 vm01 bash[20698]: cluster 2026-03-09T17:06:48.623602+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v103: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:52.156 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:51 vm01 bash[20698]: cluster 2026-03-09T17:06:50.623831+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v104: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:52.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:51 vm01 bash[20698]: cluster 2026-03-09T17:06:50.623831+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v104: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:54.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:53 vm01 bash[20698]: cluster 2026-03-09T17:06:52.624047+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v105: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:54.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:53 vm01 bash[20698]: cluster 2026-03-09T17:06:52.624047+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v105: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:56.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:55 vm01 bash[20698]: cluster 2026-03-09T17:06:54.624275+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v106: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:56.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:55 vm01 bash[20698]: cluster 2026-03-09T17:06:54.624275+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v106: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:58.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:57 vm01 bash[20698]: cluster 2026-03-09T17:06:56.624453+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v107: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:06:58.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:57 vm01 bash[20698]: cluster 2026-03-09T17:06:56.624453+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v107: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:00.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:59 vm01 
bash[20698]: cluster 2026-03-09T17:06:58.624676+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v108: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:00.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:06:59 vm01 bash[20698]: cluster 2026-03-09T17:06:58.624676+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v108: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:01 vm01 bash[20698]: cluster 2026-03-09T17:07:00.624858+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v109: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:02.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:01 vm01 bash[20698]: cluster 2026-03-09T17:07:00.624858+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v109: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:04.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:03 vm01 bash[20698]: cluster 2026-03-09T17:07:02.625092+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v110: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:04.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:03 vm01 bash[20698]: cluster 2026-03-09T17:07:02.625092+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v110: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:06.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:05 vm01 bash[20698]: cluster 2026-03-09T17:07:04.625320+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v111: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:06.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:05 vm01 bash[20698]: cluster 2026-03-09T17:07:04.625320+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v111: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:08.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:07 vm01 bash[20698]: cluster 2026-03-09T17:07:06.625555+0000 mgr.a (mgr.14150) 
134 : cluster [DBG] pgmap v112: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:08.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:07 vm01 bash[20698]: cluster 2026-03-09T17:07:06.625555+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v112: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:09 vm01 bash[20698]: cluster 2026-03-09T17:07:08.625804+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v113: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:09 vm01 bash[20698]: cluster 2026-03-09T17:07:08.625804+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v113: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:12.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:11 vm01 bash[20698]: cluster 2026-03-09T17:07:10.626026+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v114: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:12.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:11 vm01 bash[20698]: cluster 2026-03-09T17:07:10.626026+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v114: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:14.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:13 vm01 bash[20698]: cluster 2026-03-09T17:07:12.626321+0000 mgr.a (mgr.14150) 137 : cluster [DBG] pgmap v115: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:14.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:13 vm01 bash[20698]: cluster 2026-03-09T17:07:12.626321+0000 mgr.a (mgr.14150) 137 : cluster [DBG] pgmap v115: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:16.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:15 vm01 bash[20698]: cluster 2026-03-09T17:07:14.626589+0000 mgr.a (mgr.14150) 138 : cluster [DBG] pgmap v116: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB 
/ 20 GiB avail 2026-03-09T17:07:16.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:15 vm01 bash[20698]: cluster 2026-03-09T17:07:14.626589+0000 mgr.a (mgr.14150) 138 : cluster [DBG] pgmap v116: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:18.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:17 vm01 bash[20698]: cluster 2026-03-09T17:07:16.626875+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v117: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:18.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:17 vm01 bash[20698]: cluster 2026-03-09T17:07:16.626875+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v117: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:20.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:19 vm01 bash[20698]: cluster 2026-03-09T17:07:18.627109+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v118: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:20.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:19 vm01 bash[20698]: cluster 2026-03-09T17:07:18.627109+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v118: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:22.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:21 vm01 bash[20698]: cluster 2026-03-09T17:07:20.627448+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v119: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:22.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:21 vm01 bash[20698]: cluster 2026-03-09T17:07:20.627448+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v119: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:24.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:23 vm01 bash[20698]: cluster 2026-03-09T17:07:22.627687+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v120: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:24.156 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:23 vm01 bash[20698]: cluster 2026-03-09T17:07:22.627687+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v120: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:26.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:25 vm01 bash[20698]: cluster 2026-03-09T17:07:24.627896+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v121: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:26.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:25 vm01 bash[20698]: cluster 2026-03-09T17:07:24.627896+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v121: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:28.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:27 vm01 bash[20698]: cluster 2026-03-09T17:07:26.628121+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v122: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:28.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:27 vm01 bash[20698]: cluster 2026-03-09T17:07:26.628121+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v122: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:30.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:29 vm01 bash[20698]: cluster 2026-03-09T17:07:28.628350+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v123: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:30.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:29 vm01 bash[20698]: cluster 2026-03-09T17:07:28.628350+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v123: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:32.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:31 vm01 bash[20698]: cluster 2026-03-09T17:07:30.628568+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v124: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:32.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:31 vm01 
bash[20698]: cluster 2026-03-09T17:07:30.628568+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v124: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:33 vm01 bash[20698]: cluster 2026-03-09T17:07:32.628802+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v125: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:34.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:33 vm01 bash[20698]: cluster 2026-03-09T17:07:32.628802+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v125: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:36.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:35 vm01 bash[20698]: cluster 2026-03-09T17:07:34.629061+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v126: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:36.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:35 vm01 bash[20698]: cluster 2026-03-09T17:07:34.629061+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v126: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:38.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:37 vm01 bash[20698]: cluster 2026-03-09T17:07:36.629330+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v127: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:38.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:37 vm01 bash[20698]: cluster 2026-03-09T17:07:36.629330+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v127: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:40.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:39 vm01 bash[20698]: cluster 2026-03-09T17:07:38.629572+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v128: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:40.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:39 vm01 bash[20698]: cluster 2026-03-09T17:07:38.629572+0000 mgr.a (mgr.14150) 
150 : cluster [DBG] pgmap v128: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:42.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:41 vm01 bash[20698]: cluster 2026-03-09T17:07:40.629784+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v129: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:42.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:41 vm01 bash[20698]: cluster 2026-03-09T17:07:40.629784+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v129: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:44.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:43 vm01 bash[20698]: cluster 2026-03-09T17:07:42.630007+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v130: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:44.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:43 vm01 bash[20698]: cluster 2026-03-09T17:07:42.630007+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v130: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:45 vm01 bash[20698]: cluster 2026-03-09T17:07:44.630294+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v131: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:46.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:45 vm01 bash[20698]: cluster 2026-03-09T17:07:44.630294+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v131: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:48.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:47 vm01 bash[20698]: cluster 2026-03-09T17:07:46.630498+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v132: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:48.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:47 vm01 bash[20698]: cluster 2026-03-09T17:07:46.630498+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v132: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB 
/ 20 GiB avail 2026-03-09T17:07:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:49 vm01 bash[20698]: cluster 2026-03-09T17:07:48.630722+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v133: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:50.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:49 vm01 bash[20698]: cluster 2026-03-09T17:07:48.630722+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v133: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:52.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:51 vm01 bash[20698]: cluster 2026-03-09T17:07:50.630924+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v134: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:52.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:51 vm01 bash[20698]: cluster 2026-03-09T17:07:50.630924+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v134: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:54.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:53 vm01 bash[20698]: cluster 2026-03-09T17:07:52.631217+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v135: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:54.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:53 vm01 bash[20698]: cluster 2026-03-09T17:07:52.631217+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v135: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:56.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:55 vm01 bash[20698]: cluster 2026-03-09T17:07:54.631569+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v136: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:56.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:55 vm01 bash[20698]: cluster 2026-03-09T17:07:54.631569+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v136: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:58.156 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:57 vm01 bash[20698]: cluster 2026-03-09T17:07:56.631896+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v137: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:07:58.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:57 vm01 bash[20698]: cluster 2026-03-09T17:07:56.631896+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v137: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:00.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:59 vm01 bash[20698]: cluster 2026-03-09T17:07:58.632223+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v138: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:00.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:07:59 vm01 bash[20698]: cluster 2026-03-09T17:07:58.632223+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v138: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:02.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:01 vm01 bash[20698]: cluster 2026-03-09T17:08:00.632453+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v139: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:02.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:01 vm01 bash[20698]: cluster 2026-03-09T17:08:00.632453+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v139: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:04.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:03 vm01 bash[20698]: cluster 2026-03-09T17:08:02.632738+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v140: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:04.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:03 vm01 bash[20698]: cluster 2026-03-09T17:08:02.632738+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v140: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:06.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:05 vm01 
bash[20698]: cluster 2026-03-09T17:08:04.633067+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v141: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:06.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:05 vm01 bash[20698]: cluster 2026-03-09T17:08:04.633067+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v141: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:08.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:07 vm01 bash[20698]: cluster 2026-03-09T17:08:06.633345+0000 mgr.a (mgr.14150) 164 : cluster [DBG] pgmap v142: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:08.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:07 vm01 bash[20698]: cluster 2026-03-09T17:08:06.633345+0000 mgr.a (mgr.14150) 164 : cluster [DBG] pgmap v142: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:09 vm01 bash[20698]: cluster 2026-03-09T17:08:08.633583+0000 mgr.a (mgr.14150) 165 : cluster [DBG] pgmap v143: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:10.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:09 vm01 bash[20698]: cluster 2026-03-09T17:08:08.633583+0000 mgr.a (mgr.14150) 165 : cluster [DBG] pgmap v143: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:12.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:11 vm01 bash[20698]: cluster 2026-03-09T17:08:10.633812+0000 mgr.a (mgr.14150) 166 : cluster [DBG] pgmap v144: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:12.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:11 vm01 bash[20698]: cluster 2026-03-09T17:08:10.633812+0000 mgr.a (mgr.14150) 166 : cluster [DBG] pgmap v144: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:14.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:13 vm01 bash[20698]: cluster 2026-03-09T17:08:12.634054+0000 mgr.a (mgr.14150) 
167 : cluster [DBG] pgmap v145: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:14.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:13 vm01 bash[20698]: cluster 2026-03-09T17:08:12.634054+0000 mgr.a (mgr.14150) 167 : cluster [DBG] pgmap v145: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:16.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:15 vm01 bash[20698]: cluster 2026-03-09T17:08:14.634333+0000 mgr.a (mgr.14150) 168 : cluster [DBG] pgmap v146: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:16.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:15 vm01 bash[20698]: cluster 2026-03-09T17:08:14.634333+0000 mgr.a (mgr.14150) 168 : cluster [DBG] pgmap v146: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:18.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:17 vm01 bash[20698]: cluster 2026-03-09T17:08:16.634598+0000 mgr.a (mgr.14150) 169 : cluster [DBG] pgmap v147: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:18.157 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:17 vm01 bash[20698]: cluster 2026-03-09T17:08:16.634598+0000 mgr.a (mgr.14150) 169 : cluster [DBG] pgmap v147: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:20.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:19 vm01 bash[20698]: cluster 2026-03-09T17:08:18.634911+0000 mgr.a (mgr.14150) 170 : cluster [DBG] pgmap v148: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:20.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:19 vm01 bash[20698]: cluster 2026-03-09T17:08:18.634911+0000 mgr.a (mgr.14150) 170 : cluster [DBG] pgmap v148: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:21.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:20 vm01 bash[20698]: audit 2026-03-09T17:08:20.611122+0000 mon.a (mon.0) 202 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:08:21.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:20 vm01 bash[20698]: audit 2026-03-09T17:08:20.611122+0000 mon.a (mon.0) 202 : audit [DBG] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T17:08:21.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:20 vm01 bash[20698]: audit 2026-03-09T17:08:20.611771+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:08:21.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:20 vm01 bash[20698]: audit 2026-03-09T17:08:20.611771+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T17:08:21.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:20 vm01 bash[20698]: audit 2026-03-09T17:08:20.615213+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:08:21.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:20 vm01 bash[20698]: audit 2026-03-09T17:08:20.615213+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.101:0/513884187' entity='mgr.a' 2026-03-09T17:08:22.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:21 vm01 bash[20698]: cluster 2026-03-09T17:08:20.612733+0000 mgr.a (mgr.14150) 171 : cluster [DBG] pgmap v149: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:22.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:21 vm01 bash[20698]: cluster 2026-03-09T17:08:20.612733+0000 mgr.a (mgr.14150) 171 : cluster [DBG] pgmap v149: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:22.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:21 vm01 bash[20698]: cluster 
2026-03-09T17:08:20.907631+0000 mon.a (mon.0) 205 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED) 2026-03-09T17:08:22.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:21 vm01 bash[20698]: cluster 2026-03-09T17:08:20.907631+0000 mon.a (mon.0) 205 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED) 2026-03-09T17:08:24.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:23 vm01 bash[20698]: cluster 2026-03-09T17:08:22.612983+0000 mgr.a (mgr.14150) 172 : cluster [DBG] pgmap v150: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:24.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:23 vm01 bash[20698]: cluster 2026-03-09T17:08:22.612983+0000 mgr.a (mgr.14150) 172 : cluster [DBG] pgmap v150: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:26.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:25 vm01 bash[20698]: cluster 2026-03-09T17:08:24.613282+0000 mgr.a (mgr.14150) 173 : cluster [DBG] pgmap v151: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:26.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:25 vm01 bash[20698]: cluster 2026-03-09T17:08:24.613282+0000 mgr.a (mgr.14150) 173 : cluster [DBG] pgmap v151: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:28.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:27 vm01 bash[20698]: cluster 2026-03-09T17:08:26.613538+0000 mgr.a (mgr.14150) 174 : cluster [DBG] pgmap v152: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:28.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:27 vm01 bash[20698]: cluster 2026-03-09T17:08:26.613538+0000 mgr.a (mgr.14150) 174 : cluster [DBG] pgmap v152: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:30.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:29 vm01 bash[20698]: cluster 2026-03-09T17:08:28.613777+0000 mgr.a (mgr.14150) 
175 : cluster [DBG] pgmap v153: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:30.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:29 vm01 bash[20698]: cluster 2026-03-09T17:08:28.613777+0000 mgr.a (mgr.14150) 175 : cluster [DBG] pgmap v153: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:32.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:31 vm01 bash[20698]: cluster 2026-03-09T17:08:30.614008+0000 mgr.a (mgr.14150) 176 : cluster [DBG] pgmap v154: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:32.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:31 vm01 bash[20698]: cluster 2026-03-09T17:08:30.614008+0000 mgr.a (mgr.14150) 176 : cluster [DBG] pgmap v154: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:34.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:33 vm01 bash[20698]: cluster 2026-03-09T17:08:32.614246+0000 mgr.a (mgr.14150) 177 : cluster [DBG] pgmap v155: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:34.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:33 vm01 bash[20698]: cluster 2026-03-09T17:08:32.614246+0000 mgr.a (mgr.14150) 177 : cluster [DBG] pgmap v155: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:36.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:35 vm01 bash[20698]: cluster 2026-03-09T17:08:34.614521+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v156: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:36.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:35 vm01 bash[20698]: cluster 2026-03-09T17:08:34.614521+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v156: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:38.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:37 vm01 bash[20698]: cluster 2026-03-09T17:08:36.614775+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v157: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB 
/ 20 GiB avail 2026-03-09T17:08:38.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:37 vm01 bash[20698]: cluster 2026-03-09T17:08:36.614775+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v157: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:40.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:39 vm01 bash[20698]: cluster 2026-03-09T17:08:38.615020+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v158: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:40.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:39 vm01 bash[20698]: cluster 2026-03-09T17:08:38.615020+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v158: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:42.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:41 vm01 bash[20698]: cluster 2026-03-09T17:08:40.615275+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v159: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:42.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:41 vm01 bash[20698]: cluster 2026-03-09T17:08:40.615275+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v159: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:44.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:43 vm01 bash[20698]: cluster 2026-03-09T17:08:42.615546+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v160: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:44.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:43 vm01 bash[20698]: cluster 2026-03-09T17:08:42.615546+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v160: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:46.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:45 vm01 bash[20698]: cluster 2026-03-09T17:08:44.615808+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v161: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:46.406 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:45 vm01 bash[20698]: cluster 2026-03-09T17:08:44.615808+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v161: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:48.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:47 vm01 bash[20698]: cluster 2026-03-09T17:08:46.616061+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v162: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:48.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:47 vm01 bash[20698]: cluster 2026-03-09T17:08:46.616061+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v162: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:50.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:49 vm01 bash[20698]: cluster 2026-03-09T17:08:48.616328+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v163: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:50.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:49 vm01 bash[20698]: cluster 2026-03-09T17:08:48.616328+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v163: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:52.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:51 vm01 bash[20698]: cluster 2026-03-09T17:08:50.616590+0000 mgr.a (mgr.14150) 186 : cluster [DBG] pgmap v164: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:52.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:51 vm01 bash[20698]: cluster 2026-03-09T17:08:50.616590+0000 mgr.a (mgr.14150) 186 : cluster [DBG] pgmap v164: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:54.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:53 vm01 bash[20698]: cluster 2026-03-09T17:08:52.616850+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v165: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:54.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:53 vm01 
bash[20698]: cluster 2026-03-09T17:08:52.616850+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v165: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:56.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:55 vm01 bash[20698]: cluster 2026-03-09T17:08:54.617109+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v166: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:56.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:55 vm01 bash[20698]: cluster 2026-03-09T17:08:54.617109+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v166: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:58.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:57 vm01 bash[20698]: cluster 2026-03-09T17:08:56.617495+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v167: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:08:58.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:57 vm01 bash[20698]: cluster 2026-03-09T17:08:56.617495+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v167: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:00.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:59 vm01 bash[20698]: cluster 2026-03-09T17:08:58.617730+0000 mgr.a (mgr.14150) 190 : cluster [DBG] pgmap v168: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:00.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:08:59 vm01 bash[20698]: cluster 2026-03-09T17:08:58.617730+0000 mgr.a (mgr.14150) 190 : cluster [DBG] pgmap v168: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:02.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:01 vm01 bash[20698]: cluster 2026-03-09T17:09:00.618011+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v169: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:02.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:01 vm01 bash[20698]: cluster 2026-03-09T17:09:00.618011+0000 mgr.a (mgr.14150) 
191 : cluster [DBG] pgmap v169: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Looking for cluster fsid... 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Found fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Setting cephadm command timeout to 120... 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Taking hold of cephadm lock for 300 seconds... 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Triggering cephadm device refresh... 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Sleeping 150 seconds to allow for timeout to occur... 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Checking ceph health detail... 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:"cephadm shell -- ceph health detail" stdout: 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:HEALTH_WARN failed to probe daemons or devices 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:[WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout: Command "cephadm ceph-volume -- inventory" timed out on host vm01 (default 120 second timeout) 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:"cephadm shell -- ceph health detail" stderr: 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Inferring fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Using ceph image with id '654f31e6858e' and tag 'e911bdebe5c8faa3800735d1568fcdca65db60df' created on 2026-02-25 18:57:17 +0000 UTC 
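The workunit's final step, visible above, is to scan the `ceph health detail` output for the expected `CEPHADM_REFRESH_FAILED` warning and the timeout message. A minimal standalone sketch of that check, using the health-detail text captured in this log (the `/tmp/health_detail.txt` file name is made up for the demo):

```shell
# Recreate the health-detail output shown in the log above, then perform the
# same two checks the workunit does: the health code and the timeout message.
cat > /tmp/health_detail.txt <<'EOF'
HEALTH_WARN failed to probe daemons or devices
[WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
    Command "cephadm ceph-volume -- inventory" timed out on host vm01 (default 120 second timeout)
EOF
if grep -q 'CEPHADM_REFRESH_FAILED' /tmp/health_detail.txt \
   && grep -q 'timed out on host' /tmp/health_detail.txt; then
    echo "expected health warning found"
fi
```

In the real test the input comes from `cephadm shell -- ceph health detail`; only the grep logic is sketched here.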
2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Checking for correct health warning in health detail... 2026-03-09T17:09:03.522 INFO:tasks.workunit.client.0.vm01.stdout:Health warnings found successfully. Exiting. 2026-03-09T17:09:03.526 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T17:09:03.526 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T17:09:03.536 INFO:tasks.workunit:Stopping ['cephadm/test_cephadm_timeout.py'] on client.0... 2026-03-09T17:09:03.536 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-09T17:09:04.065 DEBUG:teuthology.parallel:result is None 2026-03-09T17:09:04.065 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T17:09:04.073 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T17:09:04.073 DEBUG:teuthology.orchestra.run.vm01:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T17:09:04.120 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T17:09:04.120 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T17:09:04.124 INFO:tasks.cephadm:Teardown begin 2026-03-09T17:09:04.124 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T17:09:04.170 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T17:09:04.170 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
adad5454-1bd9-11f1-a78e-99ee5fbec3ab -- ceph mgr module disable cephadm 2026-03-09T17:09:04.381 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:04 vm01 bash[20698]: cluster 2026-03-09T17:09:02.618277+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v170: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:04.381 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:04 vm01 bash[20698]: cluster 2026-03-09T17:09:02.618277+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v170: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:04.381 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:04 vm01 bash[20698]: audit 2026-03-09T17:09:03.451331+0000 mon.a (mon.0) 206 : audit [DBG] from='client.? 192.168.123.101:0/4137758972' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T17:09:04.381 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:04 vm01 bash[20698]: audit 2026-03-09T17:09:03.451331+0000 mon.a (mon.0) 206 : audit [DBG] from='client.? 
192.168.123.101:0/4137758972' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T17:09:06.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:06 vm01 bash[20698]: cluster 2026-03-09T17:09:04.618676+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v171: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:06.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:06 vm01 bash[20698]: cluster 2026-03-09T17:09:04.618676+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v171: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:08.406 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:08 vm01 bash[20698]: cluster 2026-03-09T17:09:06.618939+0000 mgr.a (mgr.14150) 194 : cluster [DBG] pgmap v172: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:08.407 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:08 vm01 bash[20698]: cluster 2026-03-09T17:09:06.618939+0000 mgr.a (mgr.14150) 194 : cluster [DBG] pgmap v172: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T17:09:08.878 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/mon.a/config 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T17:09:09.028+0000 7f4d9f06e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T17:09:09.028+0000 7f4d9f06e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T17:09:09.028+0000 7f4d9f06e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T17:09:09.028+0000 7f4d9f06e640 -1 auth: failed 
to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T17:09:09.028+0000 7f4d9f06e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T17:09:09.028+0000 7f4d9f06e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-09T17:09:09.028+0000 7f4d9f06e640 -1 monclient: keyring not found 2026-03-09T17:09:09.030 INFO:teuthology.orchestra.run.vm01.stderr:[errno 21] error connecting to the cluster 2026-03-09T17:09:09.082 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T17:09:09.082 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T17:09:09.082 DEBUG:teuthology.orchestra.run.vm01:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T17:09:09.085 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T17:09:09.085 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-09T17:09:09.085 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a 2026-03-09T17:09:09.156 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:09 vm01 systemd[1]: Stopping Ceph mon.a for adad5454-1bd9-11f1-a78e-99ee5fbec3ab... 
2026-03-09T17:09:09.349 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mon.a.service' 2026-03-09T17:09:09.349 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:09 vm01 bash[20698]: debug 2026-03-09T17:09:09.172+0000 7ff42101e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T17:09:09.349 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:09 vm01 bash[20698]: debug 2026-03-09T17:09:09.172+0000 7ff42101e640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T17:09:09.349 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 09 17:09:09 vm01 bash[39153]: ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab-mon-a 2026-03-09T17:09:09.363 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T17:09:09.363 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T17:09:09.363 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-09T17:09:09.363 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mgr.a 2026-03-09T17:09:09.478 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 09 17:09:09 vm01 systemd[1]: Stopping Ceph mgr.a for adad5454-1bd9-11f1-a78e-99ee5fbec3ab... 2026-03-09T17:09:09.545 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@mgr.a.service' 2026-03-09T17:09:09.555 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T17:09:09.555 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-09T17:09:09.555 INFO:tasks.cephadm.osd.0:Stopping osd.0... 
2026-03-09T17:09:09.555 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@osd.0 2026-03-09T17:09:09.892 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 17:09:09 vm01 systemd[1]: Stopping Ceph osd.0 for adad5454-1bd9-11f1-a78e-99ee5fbec3ab... 2026-03-09T17:09:09.892 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 17:09:09 vm01 bash[30445]: debug 2026-03-09T17:09:09.640+0000 7f4a51e30640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T17:09:09.892 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 17:09:09 vm01 bash[30445]: debug 2026-03-09T17:09:09.640+0000 7f4a51e30640 -1 osd.0 9 *** Got signal Terminated *** 2026-03-09T17:09:09.892 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 17:09:09 vm01 bash[30445]: debug 2026-03-09T17:09:09.640+0000 7f4a51e30640 -1 osd.0 9 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T17:09:15.012 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 09 17:09:14 vm01 bash[39331]: ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab-osd-0 2026-03-09T17:09:15.050 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-adad5454-1bd9-11f1-a78e-99ee5fbec3ab@osd.0.service' 2026-03-09T17:09:15.073 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T17:09:15.074 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T17:09:15.074 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab --force --keep-logs 2026-03-09T17:09:15.219 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:09:17.339 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T17:09:17.346 
INFO:teuthology.orchestra.run.vm01.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-09T17:09:17.347 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T17:09:17.347 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T17:09:17.347 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/573/remote/vm01/crash 2026-03-09T17:09:17.347 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/crash -- . 2026-03-09T17:09:17.397 INFO:teuthology.orchestra.run.vm01.stderr:tar: /var/lib/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/crash: Cannot open: No such file or directory 2026-03-09T17:09:17.397 INFO:teuthology.orchestra.run.vm01.stderr:tar: Error is not recoverable: exiting now 2026-03-09T17:09:17.397 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-09T17:09:17.397 DEBUG:teuthology.orchestra.run.vm01:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_REFRESH_FAILED | head -n 1 2026-03-09T17:09:17.452 INFO:tasks.cephadm:Compressing logs... 
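The "Checking cluster log for badness" command above filters `CEPHADM_` warnings against the job's `log-ignorelist`. A self-contained sketch of the same pipeline (using `grep -E`/`grep -Ev`, the modern spellings of the deprecated `egrep`/`egrep -v`) run against a synthetic `ceph.log` with made-up sample lines, so it can be tried without a cluster:

```shell
# Build a throwaway ceph.log with one ignorelisted CEPHADM_ warning and one
# non-CEPHADM_ warning, then apply the teardown's badness filter to it.
mkdir -p /tmp/badness-demo
cat > /tmp/badness-demo/ceph.log <<'EOF'
2026-03-09T17:08:20 mon.a [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
2026-03-09T17:08:25 mon.a [WRN] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
EOF
grep -E '\[ERR\]|\[WRN\]|\[SEC\]' /tmp/badness-demo/ceph.log \
  | grep -E 'CEPHADM_' \
  | grep -Ev '\(MDS_ALL_DOWN\)' \
  | grep -Ev '\(MDS_UP_LESS_THAN_MAX\)' \
  | grep -Ev 'CEPHADM_REFRESH_FAILED' \
  | head -n 1
# Empty output (as here) means no unexpected CEPHADM_ warnings were logged,
# which is why the run above proceeds straight to compressing logs.
```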
2026-03-09T17:09:17.452 DEBUG:teuthology.orchestra.run.vm01:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T17:09:17.502 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T17:09:17.503 INFO:teuthology.orchestra.run.vm01.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T17:09:17.503 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-mgr.a.log 2026-03-09T17:09:17.504 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/cephadm.log: 89.5% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T17:09:17.505 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.log 2026-03-09T17:09:17.505 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-mon.a.log 2026-03-09T17:09:17.506 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.log: 87.4% -- replaced with /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.log.gz 2026-03-09T17:09:17.506 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.audit.log 2026-03-09T17:09:17.517 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-volume.log 2026-03-09T17:09:17.518 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.audit.log: 87.9% -- replaced with /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.audit.log.gz 2026-03-09T17:09:17.525 
INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.cephadm.log 2026-03-09T17:09:17.528 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-volume.log.gz 2026-03-09T17:09:17.528 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-osd.0.log 2026-03-09T17:09:17.528 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.cephadm.log: 74.2% -- replaced with /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph.cephadm.log.gz 2026-03-09T17:09:17.540 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-osd.0.log: 90.8% -- replaced with /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-mgr.a.log.gz 2026-03-09T17:09:17.543 INFO:teuthology.orchestra.run.vm01.stderr: 94.5% -- replaced with /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-osd.0.log.gz 2026-03-09T17:09:17.555 INFO:teuthology.orchestra.run.vm01.stderr: 92.2% -- replaced with /var/log/ceph/adad5454-1bd9-11f1-a78e-99ee5fbec3ab/ceph-mon.a.log.gz 2026-03-09T17:09:17.556 INFO:teuthology.orchestra.run.vm01.stderr: 2026-03-09T17:09:17.556 INFO:teuthology.orchestra.run.vm01.stderr:real 0m0.060s 2026-03-09T17:09:17.556 INFO:teuthology.orchestra.run.vm01.stderr:user 0m0.086s 2026-03-09T17:09:17.556 INFO:teuthology.orchestra.run.vm01.stderr:sys 0m0.016s 2026-03-09T17:09:17.556 INFO:tasks.cephadm:Archiving logs... 2026-03-09T17:09:17.556 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/573/remote/vm01/log 2026-03-09T17:09:17.556 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T17:09:17.610 INFO:tasks.cephadm:Removing cluster... 
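The log-compression step above finds every `*.log` under the log directories and gzips each file in parallel via `xargs --max-procs=0`, which is why the `gzip --verbose` messages from different files interleave in the output. A sketch of the same pipeline against a throwaway directory (`/tmp/gzip-demo` is made up) instead of `/var/log/ceph`:

```shell
# Create sample log files, then compress them the way the teardown does.
mkdir -p /tmp/gzip-demo
echo 'sample log line' > /tmp/gzip-demo/ceph-mon.a.log
echo 'sample log line' > /tmp/gzip-demo/cephadm.log
find /tmp/gzip-demo -name '*.log' -print0 \
  | xargs --max-args=1 --max-procs=0 -0 --no-run-if-empty -- gzip -5 --
# gzip replaces each *.log with *.log.gz, matching the
# "replaced with ....log.gz" messages in the run above.
```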
2026-03-09T17:09:17.610 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid adad5454-1bd9-11f1-a78e-99ee5fbec3ab --force 2026-03-09T17:09:17.756 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: adad5454-1bd9-11f1-a78e-99ee5fbec3ab 2026-03-09T17:09:18.993 INFO:tasks.cephadm:Removing cephadm ... 2026-03-09T17:09:18.994 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T17:09:18.997 INFO:tasks.cephadm:Teardown complete 2026-03-09T17:09:18.997 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T17:09:19.000 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-09T17:09:19.000 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T17:09:19.058 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-09T17:09:19.058 DEBUG:teuthology.orchestra.run.vm01:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T17:09:19.136 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T17:09:19.308 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T17:09:19.309 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-09T17:09:19.478 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:19.478 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T17:09:19.478 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T17:09:19.478 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:19.494 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T17:09:19.495 INFO:teuthology.orchestra.run.vm01.stdout: ceph* 2026-03-09T17:09:19.705 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T17:09:19.705 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T17:09:19.750 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 118605 files and directories currently installed.) 2026-03-09T17:09:19.751 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:20.896 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:09:20.930 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 
2026-03-09T17:09:21.100 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:21.101 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:21.238 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:21.238 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T17:09:21.239 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T17:09:21.239 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:21.255 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-09T17:09:21.256 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm* cephadm*
2026-03-09T17:09:21.436 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T17:09:21.436 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T17:09:21.480 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-09T17:09:21.482 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:21.506 INFO:teuthology.orchestra.run.vm01.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:21.537 INFO:teuthology.orchestra.run.vm01.stdout:Looking for files to backup/remove ...
2026-03-09T17:09:21.538 INFO:teuthology.orchestra.run.vm01.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T17:09:21.540 INFO:teuthology.orchestra.run.vm01.stdout:Removing user `cephadm' ...
2026-03-09T17:09:21.541 INFO:teuthology.orchestra.run.vm01.stdout:Warning: group `nogroup' has no more members.
2026-03-09T17:09:21.550 INFO:teuthology.orchestra.run.vm01.stdout:Done.
2026-03-09T17:09:21.573 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T17:09:21.670 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T17:09:21.671 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:22.760 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:22.797 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:23.014 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:23.015 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:23.226 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:23.226 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T17:09:23.227 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T17:09:23.227 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:23.244 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-09T17:09:23.246 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds*
2026-03-09T17:09:23.444 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T17:09:23.444 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T17:09:23.504 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T17:09:23.507 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:23.959 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T17:09:24.063 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T17:09:24.065 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:25.628 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:25.667 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:25.896 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:25.896 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:26.147 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:26.147 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-09T17:09:26.147 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev
2026-03-09T17:09:26.148 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:26.157 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-09T17:09:26.157 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T17:09:26.158 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-k8sevents*
2026-03-09T17:09:26.357 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T17:09:26.357 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T17:09:26.399 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T17:09:26.401 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:26.412 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:26.438 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:26.480 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:27.007 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T17:09:27.010 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:28.729 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:28.763 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:28.938 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:28.938 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:29.112 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:29.112 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:29.112 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T17:09:29.113 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:29.129 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-09T17:09:29.130 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T17:09:29.331 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T17:09:29.331 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T17:09:29.372 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T17:09:29.375 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:29.447 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:29.887 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:30.300 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:30.745 INFO:teuthology.orchestra.run.vm01.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:31.164 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:31.203 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:31.676 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T17:09:31.718 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T17:09:31.794 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-09T17:09:31.797 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:32.396 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:32.805 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:33.256 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:33.717 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:35.306 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:35.340 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:35.547 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:35.547 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:35.744 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:35.744 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:35.744 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T17:09:35.745 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:35.759 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-09T17:09:35.760 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse*
2026-03-09T17:09:35.948 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T17:09:35.948 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-09T17:09:35.983 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-09T17:09:35.984 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:36.359 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T17:09:36.459 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T17:09:36.461 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:37.952 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:37.987 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:38.146 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:38.146 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T17:09:38.243 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:38.268 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T17:09:38.268 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:38.301 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:38.472 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:38.473 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T17:09:38.574 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T17:09:38.575 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T17:09:38.575 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T17:09:38.575 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T17:09:38.575 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T17:09:38.575 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:38.589 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T17:09:38.589 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:38.622 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:38.792 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:38.792 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:38.959 INFO:teuthology.orchestra.run.vm01.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T17:09:38.959 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:38.959 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T17:09:38.960 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T17:09:38.961 INFO:teuthology.orchestra.run.vm01.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T17:09:38.961 INFO:teuthology.orchestra.run.vm01.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T17:09:38.961 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T17:09:38.961 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:38.983 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T17:09:38.983 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:39.016 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:39.221 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:39.222 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:39.349 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T17:09:39.350 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:09:39.351 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T17:09:39.351 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T17:09:39.351 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:39.364 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T17:09:39.364 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T17:09:39.527 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T17:09:39.528 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T17:09:39.559 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T17:09:39.561 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:09:39.571 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:39.580 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:40.784 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:09:40.819 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T17:09:41.029 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T17:09:41.030 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T17:09:41.240 INFO:teuthology.orchestra.run.vm01.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T17:09:41.240 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:41.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T17:09:41.240 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T17:09:41.240 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T17:09:41.240 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections 
python3-jaraco.functools python3-jaraco.text 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T17:09:41.241 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:41.267 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T17:09:41.267 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:09:41.301 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T17:09:41.480 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 
2026-03-09T17:09:41.480 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T17:09:41.609 INFO:teuthology.orchestra.run.vm01.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T17:09:41.609 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:41.609 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T17:09:41.609 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T17:09:41.609 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 
2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T17:09:41.610 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:41.629 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T17:09:41.629 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:09:41.664 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T17:09:41.853 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T17:09:41.853 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-09T17:09:42.007 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:42.007 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T17:09:42.008 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:42.019 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T17:09:42.019 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd* 2026-03-09T17:09:42.183 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T17:09:42.183 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T17:09:42.222 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T17:09:42.224 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T17:09:43.321 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:09:43.355 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T17:09:43.551 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T17:09:43.552 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-09T17:09:43.744 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:43.744 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T17:09:43.744 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T17:09:43.744 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T17:09:43.745 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:43.757 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T17:09:43.758 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-dev* libcephfs2* 2026-03-09T17:09:43.936 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T17:09:43.936 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T17:09:43.976 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 
75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T17:09:43.978 INFO:teuthology.orchestra.run.vm01.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:43.990 INFO:teuthology.orchestra.run.vm01.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:44.013 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T17:09:45.079 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:09:45.111 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T17:09:45.301 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T17:09:45.301 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-09T17:09:45.457 INFO:teuthology.orchestra.run.vm01.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T17:09:45.457 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:45.457 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T17:09:45.457 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T17:09:45.457 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa 
python3-simplegeneric python3-simplejson 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet zip 2026-03-09T17:09:45.458 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:45.480 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T17:09:45.480 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T17:09:45.511 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-09T17:09:45.709 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-09T17:09:45.709 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T17:09:45.829 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify 
python3-repoze.lru python3-requests-oauthlib 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T17:09:45.830 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T17:09:45.838 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-09T17:09:45.839 INFO:teuthology.orchestra.run.vm01.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T17:09:45.839 INFO:teuthology.orchestra.run.vm01.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T17:09:46.006 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T17:09:46.006 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T17:09:46.043 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T17:09:46.046 INFO:teuthology.orchestra.run.vm01.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:46.059 INFO:teuthology.orchestra.run.vm01.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:46.072 INFO:teuthology.orchestra.run.vm01.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:46.083 INFO:teuthology.orchestra.run.vm01.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T17:09:46.483 INFO:teuthology.orchestra.run.vm01.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:46.496 INFO:teuthology.orchestra.run.vm01.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:46.509 INFO:teuthology.orchestra.run.vm01.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T17:09:46.535 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T17:09:46.575 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T17:09:46.668 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 
2026-03-09T17:09:46.671 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T17:09:48.191 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:48.227 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:48.424 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:48.425 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:48.605 INFO:teuthology.orchestra.run.vm01.stdout:Package 'librbd1' is not installed, so not removed
2026-03-09T17:09:48.605 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:48.605 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:48.605 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T17:09:48.605 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T17:09:48.605 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T17:09:48.606 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T17:09:48.606 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T17:09:48.607 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:48.631 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T17:09:48.632 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:48.664 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:48.874 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:48.874 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:49.030 INFO:teuthology.orchestra.run.vm01.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-09T17:09:49.030 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T17:09:49.030 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:49.030 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T17:09:49.030 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T17:09:49.030 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T17:09:49.031 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T17:09:49.051 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T17:09:49.051 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:49.052 DEBUG:teuthology.orchestra.run.vm01:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-09T17:09:49.109 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-09T17:09:49.191 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:49.371 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-09T17:09:49.372 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-09T17:09:49.503 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-09T17:09:49.503 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T17:09:49.503 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-09T17:09:49.503 INFO:teuthology.orchestra.run.vm01.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T17:09:49.503 INFO:teuthology.orchestra.run.vm01.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-09T17:09:49.504 INFO:teuthology.orchestra.run.vm01.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-09T17:09:49.504 INFO:teuthology.orchestra.run.vm01.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-09T17:09:49.504 INFO:teuthology.orchestra.run.vm01.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T17:09:49.504 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T17:09:49.504 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T17:09:49.504 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-09T17:09:49.505 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T17:09:49.661 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded.
2026-03-09T17:09:49.661 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 107 MB disk space will be freed.
2026-03-09T17:09:49.700 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-09T17:09:49.703 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:49.720 INFO:teuthology.orchestra.run.vm01.stdout:Removing jq (1.6-2.1ubuntu3.1) ...
2026-03-09T17:09:49.731 INFO:teuthology.orchestra.run.vm01.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ...
2026-03-09T17:09:49.744 INFO:teuthology.orchestra.run.vm01.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-09T17:09:49.756 INFO:teuthology.orchestra.run.vm01.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-09T17:09:49.767 INFO:teuthology.orchestra.run.vm01.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T17:09:49.779 INFO:teuthology.orchestra.run.vm01.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:09:49.790 INFO:teuthology.orchestra.run.vm01.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:09:49.802 INFO:teuthology.orchestra.run.vm01.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T17:09:49.821 INFO:teuthology.orchestra.run.vm01.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T17:09:49.834 INFO:teuthology.orchestra.run.vm01.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T17:09:49.845 INFO:teuthology.orchestra.run.vm01.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ...
2026-03-09T17:09:49.857 INFO:teuthology.orchestra.run.vm01.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ...
2026-03-09T17:09:49.868 INFO:teuthology.orchestra.run.vm01.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ...
2026-03-09T17:09:49.878 INFO:teuthology.orchestra.run.vm01.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ...
2026-03-09T17:09:49.890 INFO:teuthology.orchestra.run.vm01.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ...
2026-03-09T17:09:49.901 INFO:teuthology.orchestra.run.vm01.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T17:09:49.911 INFO:teuthology.orchestra.run.vm01.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T17:09:49.922 INFO:teuthology.orchestra.run.vm01.stdout:Removing luarocks (3.8.0+dfsg1-1) ...
2026-03-09T17:09:49.946 INFO:teuthology.orchestra.run.vm01.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T17:09:49.957 INFO:teuthology.orchestra.run.vm01.stdout:Removing libnbd0 (1.10.5-1) ...
2026-03-09T17:09:49.967 INFO:teuthology.orchestra.run.vm01.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T17:09:49.978 INFO:teuthology.orchestra.run.vm01.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T17:09:49.989 INFO:teuthology.orchestra.run.vm01.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T17:09:50.002 INFO:teuthology.orchestra.run.vm01.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ...
2026-03-09T17:09:50.013 INFO:teuthology.orchestra.run.vm01.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T17:09:50.024 INFO:teuthology.orchestra.run.vm01.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T17:09:50.036 INFO:teuthology.orchestra.run.vm01.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ...
2026-03-09T17:09:50.044 INFO:teuthology.orchestra.run.vm01.stdout:update-initramfs: deferring update (trigger activated)
2026-03-09T17:09:50.054 INFO:teuthology.orchestra.run.vm01.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ...
2026-03-09T17:09:50.072 INFO:teuthology.orchestra.run.vm01.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ...
2026-03-09T17:09:50.083 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua-any (27ubuntu1) ...
2026-03-09T17:09:50.095 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-09T17:09:50.108 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T17:09:50.123 INFO:teuthology.orchestra.run.vm01.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-09T17:09:50.143 INFO:teuthology.orchestra.run.vm01.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T17:09:50.527 INFO:teuthology.orchestra.run.vm01.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T17:09:50.558 INFO:teuthology.orchestra.run.vm01.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T17:09:50.581 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T17:09:50.636 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-09T17:09:50.685 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-09T17:09:50.742 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-09T17:09:50.793 INFO:teuthology.orchestra.run.vm01.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T17:09:50.803 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T17:09:50.860 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T17:09:51.124 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-09T17:09:51.175 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-09T17:09:51.221 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:51.268 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T17:09:51.317 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-09T17:09:51.373 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T17:09:51.436 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-09T17:09:51.481 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-09T17:09:51.528 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-09T17:09:51.576 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-09T17:09:51.627 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-09T17:09:51.685 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-09T17:09:51.733 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T17:09:51.850 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T17:09:51.909 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-09T17:09:51.956 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T17:09:52.006 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-09T17:09:52.055 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T17:09:52.110 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-09T17:09:52.156 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-09T17:09:52.205 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-09T17:09:52.254 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T17:09:52.304 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-09T17:09:52.350 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T17:09:52.398 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-rsa (4.8-1) ...
2026-03-09T17:09:52.446 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-09T17:09:52.492 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-09T17:09:52.542 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-09T17:09:52.589 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T17:09:52.616 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T17:09:52.662 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-09T17:09:52.708 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T17:09:52.755 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T17:09:52.802 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T17:09:52.849 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T17:09:52.896 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T17:09:52.946 INFO:teuthology.orchestra.run.vm01.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T17:09:52.990 INFO:teuthology.orchestra.run.vm01.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T17:09:53.012 INFO:teuthology.orchestra.run.vm01.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T17:09:53.389 INFO:teuthology.orchestra.run.vm01.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T17:09:53.401 INFO:teuthology.orchestra.run.vm01.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T17:09:53.420 INFO:teuthology.orchestra.run.vm01.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T17:09:53.438 INFO:teuthology.orchestra.run.vm01.stdout:Removing zip (3.0-12build2) ...
2026-03-09T17:09:53.462 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T17:09:53.471 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T17:09:53.516 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T17:09:53.523 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-09T17:09:53.540 INFO:teuthology.orchestra.run.vm01.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-09T17:09:55.030 INFO:teuthology.orchestra.run.vm01.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-09T17:09:55.031 INFO:teuthology.orchestra.run.vm01.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-09T17:09:56.981 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T17:09:56.984 DEBUG:teuthology.parallel:result is None
2026-03-09T17:09:56.984 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm01.local
2026-03-09T17:09:56.984 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-09T17:09:57.035 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-get update
2026-03-09T17:09:57.208 INFO:teuthology.orchestra.run.vm01.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T17:09:57.211 INFO:teuthology.orchestra.run.vm01.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T17:09:57.219 INFO:teuthology.orchestra.run.vm01.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T17:09:57.297 INFO:teuthology.orchestra.run.vm01.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T17:09:58.171 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-09T17:09:58.183 DEBUG:teuthology.parallel:result is None
2026-03-09T17:09:58.183 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-09T17:09:58.185 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-09T17:09:58.185 DEBUG:teuthology.orchestra.run.vm01:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout:==============================================================================
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout:+formularfetisch 131.188.3.220 2 u 41 64 377 25.109 +0.071 1.829
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout:+x.ns.gin.ntt.ne 129.250.35.222 2 u 42 64 377 20.364 +0.075 2.163
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout:+pve2.h4x-gamers 192.53.103.108 2 u 46 64 377 24.996 +6.046 5.624
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout:+139-162-156-95. 80.192.165.246 2 u 42 64 377 22.782 -3.658 1.742
2026-03-09T17:09:58.664 INFO:teuthology.orchestra.run.vm01.stdout:#185.232.69.65 ( .PHC0. 1 u 40 64 377 28.282 -2.217 1.256
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:#stratum2-1.NTP. 129.70.137.82 2 u 61 64 47 28.054 +9.863 7.491
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:+mail.sassmann.n 192.53.103.103 2 u 36 64 377 23.592 +0.481 1.257
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:*ntp3.rrze.uni-e .PZFs. 1 u 40 64 377 26.000 +5.524 6.150
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:+ntp2.lwlcom.net .GPS. 1 u 38 64 377 30.854 +3.668 1.728
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:#ip217-154-182-6 37.15.221.189 2 u 49 64 377 66.707 -5.416 1.782
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:+obelix.hetzner. 213.239.239.166 3 u 38 64 377 25.016 +2.312 2.128
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:+web35.weingaert 130.149.17.21 2 u 34 64 377 27.907 +2.814 0.884
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:+185.13.148.71 79.133.44.146 2 u 40 64 377 31.941 +0.390 1.369
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:#185.125.190.57 194.121.207.249 2 u 49 64 377 33.284 +1.567 2.958
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:+ntp2.uni-ulm.de 129.69.253.1 2 u 40 64 377 27.306 -0.743 1.258
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:+82.165.178.31 33.40.230.73 2 u 41 64 377 27.196 +2.455 3.215
2026-03-09T17:09:58.665 INFO:teuthology.orchestra.run.vm01.stdout:#185.125.190.56 79.243.60.50 2 u 54 64 377 33.267 +1.213 3.327
2026-03-09T17:09:58.665 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T17:09:58.667 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T17:09:58.667 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T17:09:58.669 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T17:09:58.671 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T17:09:58.673 INFO:teuthology.task.internal:Duration was 644.815406 seconds
2026-03-09T17:09:58.673 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T17:09:58.675 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T17:09:58.675 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T17:09:58.698 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T17:09:58.698 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm01.local
2026-03-09T17:09:58.698 DEBUG:teuthology.orchestra.run.vm01:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T17:09:58.751 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T17:09:58.751 DEBUG:teuthology.orchestra.run.vm01:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T17:09:58.823 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T17:09:58.823 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T17:09:58.872 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T17:09:58.872 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T17:09:58.872 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0%gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T17:09:58.872 INFO:teuthology.orchestra.run.vm01.stderr: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T17:09:58.872 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T17:09:58.878 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 87.6% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T17:09:58.879 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T17:09:58.881 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T17:09:58.881 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T17:09:58.927 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T17:09:58.930 DEBUG:teuthology.orchestra.run.vm01:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T17:09:58.975 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = core
2026-03-09T17:09:58.982 DEBUG:teuthology.orchestra.run.vm01:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T17:09:59.027 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T17:09:59.027 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T17:09:59.029 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T17:09:59.029 DEBUG:teuthology.misc:Transferring archived files from vm01:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/573/remote/vm01
2026-03-09T17:09:59.030 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T17:09:59.076 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T17:09:59.076 DEBUG:teuthology.orchestra.run.vm01:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T17:09:59.123 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T17:09:59.126 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T17:09:59.126 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T17:09:59.128 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T17:09:59.128 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T17:09:59.167 INFO:teuthology.orchestra.run.vm01.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 17:09 /home/ubuntu/cephtest
2026-03-09T17:09:59.168 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T17:09:59.173 INFO:teuthology.run:Summary data:
description: orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_cephadm_timeout}
duration: 644.8154058456421
flavor: default
owner: kyr
success: true
2026-03-09T17:09:59.173 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T17:09:59.190 INFO:teuthology.run:pass