2026-03-10T12:38:46.280 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T12:38:46.283 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T12:38:46.344 INFO:teuthology.run:Config: archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033
branch: squid
description: orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 1-start 2-services/basic 3-final}
email: null
first_in_suite: false
flavor: default
job_id: '1033'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_DAEMON_PLACE_FAIL
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
- - host.b
  - client.1
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm06.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM3mieSDFj2gVt2MTPcgXs2zQvqxMXdk168PVAXnMjVcwFYQTbKdj/edstmd2gMQizLLCTgMW7F1QM2QIslBEAg=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDAHFQzu3KQWaOtTJMjbSCHifMohgWMkJ/lLTPUgZNj7vH76k9oyFUC33bJNQAWsT78UOR1kX7qJUKBlu9WfB3I=
tasks:
- cephadm:
    roleless: true
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- cephadm.shell:
    host.a:
    - stat -c '%u %g' /var/log/ceph | grep '167 167'
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
    - ceph orch ls | grep '^osd.all-available-devices '
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T12:38:46.349 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T12:38:46.350 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T12:38:46.350 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T12:38:46.350 INFO:teuthology.task.internal:Checking packages...
2026-03-10T12:38:46.350 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T12:38:46.350 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T12:38:46.350 INFO:teuthology.packaging:ref: None
2026-03-10T12:38:46.350 INFO:teuthology.packaging:tag: None
2026-03-10T12:38:46.350 INFO:teuthology.packaging:branch: squid
2026-03-10T12:38:46.350 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:38:46.350 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T12:38:47.018 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T12:38:47.020 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T12:38:47.021 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T12:38:47.021 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T12:38:47.021 INFO:teuthology.task.internal:Saving configuration
2026-03-10T12:38:47.025 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T12:38:47.026 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T12:38:47.033 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm06.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 12:37:37.007504', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:06', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM3mieSDFj2gVt2MTPcgXs2zQvqxMXdk168PVAXnMjVcwFYQTbKdj/edstmd2gMQizLLCTgMW7F1QM2QIslBEAg='}
2026-03-10T12:38:47.038 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 12:37:37.006929', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDAHFQzu3KQWaOtTJMjbSCHifMohgWMkJ/lLTPUgZNj7vH76k9oyFUC33bJNQAWsT78UOR1kX7qJUKBlu9WfB3I='}
2026-03-10T12:38:47.039 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T12:38:47.039 INFO:teuthology.task.internal:roles: ubuntu@vm06.local - ['host.a', 'client.0']
2026-03-10T12:38:47.039 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['host.b', 'client.1']
2026-03-10T12:38:47.039 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T12:38:47.045 DEBUG:teuthology.task.console_log:vm06 does not support IPMI; excluding
2026-03-10T12:38:47.052 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-10T12:38:47.052 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f236571a290>, signals=[15])
2026-03-10T12:38:47.052 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T12:38:47.053 INFO:teuthology.task.internal:Opening connections...
2026-03-10T12:38:47.053 DEBUG:teuthology.task.internal:connecting to ubuntu@vm06.local
2026-03-10T12:38:47.054 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:38:47.110 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-10T12:38:47.111 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:38:47.171 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T12:38:47.172 DEBUG:teuthology.orchestra.run.vm06:> uname -m
2026-03-10T12:38:47.200 INFO:teuthology.orchestra.run.vm06.stdout:x86_64
2026-03-10T12:38:47.200 DEBUG:teuthology.orchestra.run.vm06:> cat /etc/os-release
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:NAME="Ubuntu"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_ID="22.04"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_CODENAME=jammy
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:ID=ubuntu
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:ID_LIKE=debian
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T12:38:47.246 INFO:teuthology.orchestra.run.vm06.stdout:UBUNTU_CODENAME=jammy
2026-03-10T12:38:47.246 INFO:teuthology.lock.ops:Updating vm06.local on lock server
2026-03-10T12:38:47.251 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-10T12:38:47.255 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-10T12:38:47.255 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:NAME="Ubuntu"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="22.04"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_CODENAME=jammy
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:ID=ubuntu
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE=debian
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T12:38:47.301 INFO:teuthology.orchestra.run.vm09.stdout:UBUNTU_CODENAME=jammy
2026-03-10T12:38:47.301 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-10T12:38:47.306 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T12:38:47.308 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T12:38:47.308 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T12:38:47.309 DEBUG:teuthology.orchestra.run.vm06:> test '!' -e /home/ubuntu/cephtest
2026-03-10T12:38:47.310 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-10T12:38:47.345 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T12:38:47.346 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T12:38:47.346 DEBUG:teuthology.orchestra.run.vm06:> test -z $(ls -A /var/lib/ceph)
2026-03-10T12:38:47.355 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-10T12:38:47.358 INFO:teuthology.orchestra.run.vm06.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T12:38:47.389 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T12:38:47.390 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T12:38:47.398 DEBUG:teuthology.orchestra.run.vm06:> test -e /ceph-qa-ready
2026-03-10T12:38:47.401 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:38:47.642 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-10T12:38:47.645 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:38:47.975 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T12:38:47.976 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T12:38:47.976 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T12:38:47.977 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T12:38:47.980 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T12:38:47.982 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T12:38:47.983 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T12:38:47.983 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T12:38:48.023 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T12:38:48.027 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T12:38:48.029 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T12:38:48.029 DEBUG:teuthology.orchestra.run.vm06:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T12:38:48.069 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:38:48.069 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T12:38:48.071 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:38:48.071 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T12:38:48.111 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T12:38:48.119 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:38:48.121 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:38:48.125 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:38:48.125 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:38:48.126 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T12:38:48.127 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T12:38:48.128 DEBUG:teuthology.orchestra.run.vm06:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T12:38:48.167 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T12:38:48.176 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T12:38:48.178 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T12:38:48.178 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T12:38:48.215 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T12:38:48.220 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T12:38:48.260 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T12:38:48.304 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T12:38:48.304 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T12:38:48.353 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T12:38:48.356 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T12:38:48.400 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T12:38:48.400 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T12:38:48.448 DEBUG:teuthology.orchestra.run.vm06:> sudo service rsyslog restart
2026-03-10T12:38:48.449 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-10T12:38:48.504 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T12:38:48.506 INFO:teuthology.task.internal:Starting timer...
2026-03-10T12:38:48.506 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T12:38:48.509 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T12:38:48.511 INFO:teuthology.task.selinux:Excluding vm06: VMs are not yet supported
2026-03-10T12:38:48.511 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-10T12:38:48.511 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T12:38:48.511 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T12:38:48.511 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T12:38:48.511 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T12:38:48.512 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T12:38:48.513 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T12:38:48.514 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T12:38:49.231 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T12:38:49.236 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T12:38:49.237 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory1yzl68k9 --limit vm06.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T12:40:58.939 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm06.local'), Remote(name='ubuntu@vm09.local')]
2026-03-10T12:40:58.939 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm06.local'
2026-03-10T12:40:58.940 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:40:59.000 DEBUG:teuthology.orchestra.run.vm06:> true
2026-03-10T12:40:59.205 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm06.local'
2026-03-10T12:40:59.205 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-10T12:40:59.205 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:40:59.267 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-10T12:40:59.472 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-10T12:40:59.472 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T12:40:59.475 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T12:40:59.475 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T12:40:59.475 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T12:40:59.476 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T12:40:59.476 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Command line: ntpd -gq
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: ----------------------------------------------------
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: corporation.  Support and training for ntp-4 are
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: available at https://www.nwtime.org/support
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: ----------------------------------------------------
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: proto: precision = 0.029 usec (-25)
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: basedate set to 2022-02-04
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: gps base set to 2022-02-06 (week 2196)
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stderr:10 Mar 12:40:59 ntpd[16103]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Listen normally on 3 ens3 192.168.123.106:123
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Listen normally on 4 lo [::1]:123
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:6%2]:123
2026-03-10T12:40:59.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:40:59 ntpd[16103]: Listening on routing socket on fd #22 for interface updates
2026-03-10T12:40:59.527 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T12:40:59.527 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Command line: ntpd -gq
2026-03-10T12:40:59.527 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: ----------------------------------------------------
2026-03-10T12:40:59.527 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T12:40:59.527 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: corporation.  Support and training for ntp-4 are
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: available at https://www.nwtime.org/support
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: ----------------------------------------------------
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: proto: precision = 0.029 usec (-25)
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: basedate set to 2022-02-04
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: gps base set to 2022-02-06 (week 2196)
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Listen normally on 3 ens3 192.168.123.109:123
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Listen normally on 4 lo [::1]:123
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:9%2]:123
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:40:59 ntpd[16110]: Listening on routing socket on fd #22 for interface updates
2026-03-10T12:40:59.528 INFO:teuthology.orchestra.run.vm09.stderr:10 Mar 12:40:59 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T12:41:00.491 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:00 ntpd[16103]: Soliciting pool server 159.195.55.239
2026-03-10T12:41:00.527 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:00 ntpd[16110]: Soliciting pool server 159.195.55.239
2026-03-10T12:41:01.489 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:01 ntpd[16103]: Soliciting pool server 185.252.140.126
2026-03-10T12:41:01.490 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:01 ntpd[16103]: Soliciting pool server 158.180.28.150
2026-03-10T12:41:01.526 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:01 ntpd[16110]: Soliciting pool server 185.252.140.126
2026-03-10T12:41:01.527 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:01 ntpd[16110]: Soliciting pool server 158.180.28.150
2026-03-10T12:41:02.489 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:02 ntpd[16103]: Soliciting pool server 195.201.107.151
2026-03-10T12:41:02.489 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:02 ntpd[16103]: Soliciting pool server 5.75.181.179
2026-03-10T12:41:02.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:02 ntpd[16110]: Soliciting pool server 195.201.107.151
2026-03-10T12:41:02.526 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:02 ntpd[16110]: Soliciting pool server 5.75.181.179
2026-03-10T12:41:02.567 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:02 ntpd[16110]: Soliciting pool server 131.188.3.220
2026-03-10T12:41:02.568 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:02 ntpd[16103]: Soliciting pool server 131.188.3.220
2026-03-10T12:41:03.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:03 ntpd[16103]: Soliciting pool server 46.38.244.94
2026-03-10T12:41:03.489 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:03 ntpd[16103]: Soliciting pool server 90.187.112.137
2026-03-10T12:41:03.489 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:03 ntpd[16103]: Soliciting pool server 193.141.27.6
2026-03-10T12:41:03.489 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:03 ntpd[16103]: Soliciting pool server 134.60.111.110
2026-03-10T12:41:03.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:03 ntpd[16110]: Soliciting pool server 46.38.244.94
2026-03-10T12:41:03.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:03 ntpd[16110]: Soliciting pool server 90.187.112.137
2026-03-10T12:41:03.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:03 ntpd[16110]: Soliciting pool server 193.141.27.6
2026-03-10T12:41:03.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:03 ntpd[16110]: Soliciting pool server 134.60.111.110
2026-03-10T12:41:04.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:04 ntpd[16103]: Soliciting pool server 185.232.69.65
2026-03-10T12:41:04.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:04 ntpd[16103]: Soliciting pool server 144.76.66.156
2026-03-10T12:41:04.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:04 ntpd[16103]: Soliciting pool server 51.75.67.47
2026-03-10T12:41:04.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:04 ntpd[16103]: Soliciting pool server 185.125.190.57
2026-03-10T12:41:04.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:04 ntpd[16110]: Soliciting pool server 185.232.69.65
2026-03-10T12:41:04.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:04 ntpd[16110]: Soliciting pool server 144.76.66.156
2026-03-10T12:41:04.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:04 ntpd[16110]: Soliciting pool server 51.75.67.47
2026-03-10T12:41:04.525 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:04 ntpd[16110]: Soliciting pool server 185.125.190.57
2026-03-10T12:41:05.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:05 ntpd[16103]: Soliciting pool server 185.125.190.56
2026-03-10T12:41:05.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:05 ntpd[16103]: Soliciting pool server 139.162.187.236
2026-03-10T12:41:05.488 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:05 ntpd[16103]: Soliciting pool server 176.9.157.155
2026-03-10T12:41:05.524 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:05 ntpd[16110]: Soliciting pool server 185.125.190.56
2026-03-10T12:41:05.524 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:05 ntpd[16110]: Soliciting pool server 139.162.187.236
2026-03-10T12:41:05.524 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:05 ntpd[16110]: Soliciting pool server 176.9.157.155
2026-03-10T12:41:07.525 INFO:teuthology.orchestra.run.vm06.stdout:10 Mar 12:41:07 ntpd[16103]: ntpd: time slew +0.008807 s
2026-03-10T12:41:07.526 INFO:teuthology.orchestra.run.vm06.stdout:ntpd: time slew +0.008807s
2026-03-10T12:41:07.550 INFO:teuthology.orchestra.run.vm06.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T12:41:07.550 INFO:teuthology.orchestra.run.vm06.stdout:==============================================================================
2026-03-10T12:41:07.550 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.550 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.550 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.550 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.550 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.562 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 12:41:07 ntpd[16110]: ntpd: time slew -0.004932 s
2026-03-10T12:41:07.562 INFO:teuthology.orchestra.run.vm09.stdout:ntpd: time slew -0.004932s
2026-03-10T12:41:07.586 INFO:teuthology.orchestra.run.vm09.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T12:41:07.586 INFO:teuthology.orchestra.run.vm09.stdout:==============================================================================
2026-03-10T12:41:07.586 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.586 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.586 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.587 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.587 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T12:41:07.587 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T12:41:07.634 INFO:tasks.cephadm:Config: {'roleless': True, 'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_DAEMON_PLACE_FAIL', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T12:41:07.634 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:41:07.634 INFO:tasks.cephadm:Cluster fsid is 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:41:07.634 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T12:41:07.634 INFO:tasks.cephadm:No mon roles; fabricating mons
2026-03-10T12:41:07.634 INFO:tasks.cephadm:Monitor IPs: {'mon.vm06': '192.168.123.106', 'mon.vm09': '192.168.123.109'}
2026-03-10T12:41:07.634 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T12:41:07.634 DEBUG:teuthology.orchestra.run.vm06:> sudo hostname $(hostname -s)
2026-03-10T12:41:07.643 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s)
2026-03-10T12:41:07.651 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra
2026-03-10T12:41:07.652 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:41:08.328 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T12:41:08.996 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:41:08.997 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-10T12:41:08.997 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-10T12:41:08.997 DEBUG:teuthology.orchestra.run.vm06:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T12:41:10.321 INFO:teuthology.orchestra.run.vm06.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 12:41 /home/ubuntu/cephtest/cephadm
2026-03-10T12:41:10.321 DEBUG:teuthology.orchestra.run.vm09:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T12:41:11.577 INFO:teuthology.orchestra.run.vm09.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 12:41 /home/ubuntu/cephtest/cephadm
2026-03-10T12:41:11.577 DEBUG:teuthology.orchestra.run.vm06:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T12:41:11.581 DEBUG:teuthology.orchestra.run.vm09:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T12:41:11.589 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T12:41:11.589 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T12:41:11.624 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T12:41:11.722 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T12:41:11.726 INFO:teuthology.orchestra.run.vm06.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T12:42:26.579 INFO:teuthology.orchestra.run.vm09.stdout:{
2026-03-10T12:42:26.580 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T12:42:26.580 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T12:42:26.580 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [
2026-03-10T12:42:26.580 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T12:42:26.580 INFO:teuthology.orchestra.run.vm09.stdout: ]
2026-03-10T12:42:26.580 INFO:teuthology.orchestra.run.vm09.stdout:}
2026-03-10T12:42:26.641 INFO:teuthology.orchestra.run.vm06.stdout:{
2026-03-10T12:42:26.641 INFO:teuthology.orchestra.run.vm06.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T12:42:26.641 INFO:teuthology.orchestra.run.vm06.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T12:42:26.642 INFO:teuthology.orchestra.run.vm06.stdout: "repo_digests": [
2026-03-10T12:42:26.642 INFO:teuthology.orchestra.run.vm06.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T12:42:26.642 INFO:teuthology.orchestra.run.vm06.stdout: ]
2026-03-10T12:42:26.642 INFO:teuthology.orchestra.run.vm06.stdout:}
2026-03-10T12:42:26.654 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /etc/ceph
2026-03-10T12:42:26.663 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph
2026-03-10T12:42:26.671 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 777 /etc/ceph
2026-03-10T12:42:26.713 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph
2026-03-10T12:42:26.721 INFO:tasks.cephadm:Writing seed config...
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T12:42:26.721 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-10T12:42:26.721 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T12:42:26.721 DEBUG:teuthology.orchestra.run.vm06:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T12:42:26.760 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 68e2be40-1c7e-11f1-b779-df2955349a39
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T12:42:26.760 DEBUG:teuthology.orchestra.run.vm06:mon.vm06> sudo journalctl -f -n 0 -u ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06.service
2026-03-10T12:42:26.802 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T12:42:26.802 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.106 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T12:42:26.943 INFO:teuthology.orchestra.run.vm06.stdout:--------------------------------------------------------------------------------
2026-03-10T12:42:26.943 INFO:teuthology.orchestra.run.vm06.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '68e2be40-1c7e-11f1-b779-df2955349a39', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.106', '--skip-admin-label']
2026-03-10T12:42:26.943 INFO:teuthology.orchestra.run.vm06.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T12:42:26.943 INFO:teuthology.orchestra.run.vm06.stdout:Verifying podman|docker is present...
2026-03-10T12:42:26.943 INFO:teuthology.orchestra.run.vm06.stdout:Verifying lvm2 is present...
2026-03-10T12:42:26.943 INFO:teuthology.orchestra.run.vm06.stdout:Verifying time synchronization is in place...
2026-03-10T12:42:26.946 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T12:42:26.946 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T12:42:26.948 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T12:42:26.948 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.951 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-10T12:42:26.951 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T12:42:26.953 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-10T12:42:26.953 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.955 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-10T12:42:26.955 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout masked
2026-03-10T12:42:26.957 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-10T12:42:26.957 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.959 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-10T12:42:26.959 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T12:42:26.962 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-10T12:42:26.962 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.965 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout enabled
2026-03-10T12:42:26.968 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout active
2026-03-10T12:42:26.968 INFO:teuthology.orchestra.run.vm06.stdout:Unit ntp.service is enabled and running
2026-03-10T12:42:26.968 INFO:teuthology.orchestra.run.vm06.stdout:Repeating the final host check...
2026-03-10T12:42:26.968 INFO:teuthology.orchestra.run.vm06.stdout:docker (/usr/bin/docker) is present
2026-03-10T12:42:26.968 INFO:teuthology.orchestra.run.vm06.stdout:systemctl is present
2026-03-10T12:42:26.968 INFO:teuthology.orchestra.run.vm06.stdout:lvcreate is present
2026-03-10T12:42:26.970 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T12:42:26.971 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T12:42:26.973 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T12:42:26.973 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.975 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-10T12:42:26.975 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T12:42:26.977 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-10T12:42:26.977 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.980 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-10T12:42:26.980 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout masked
2026-03-10T12:42:26.982 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-10T12:42:26.982 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.984 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-10T12:42:26.985 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T12:42:26.987 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-10T12:42:26.988 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive
2026-03-10T12:42:26.990 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout enabled
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout active
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Unit ntp.service is enabled and running
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Host looks OK
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Cluster fsid: 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Acquiring lock 139666781341584 on /run/cephadm/68e2be40-1c7e-11f1-b779-df2955349a39.lock
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Lock 139666781341584 acquired on /run/cephadm/68e2be40-1c7e-11f1-b779-df2955349a39.lock
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Verifying IP 192.168.123.106 port 3300 ...
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Verifying IP 192.168.123.106 port 6789 ...
2026-03-10T12:42:26.993 INFO:teuthology.orchestra.run.vm06.stdout:Base mon IP(s) is [192.168.123.106:3300, 192.168.123.106:6789], mon addrv is [v2:192.168.123.106:3300,v1:192.168.123.106:6789]
2026-03-10T12:42:26.995 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.106 metric 100
2026-03-10T12:42:26.995 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-10T12:42:26.995 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.106 metric 100
2026-03-10T12:42:26.995 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.106 metric 100
2026-03-10T12:42:26.996 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T12:42:26.996 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:6/64 scope link
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.0/24`
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.0/24`
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.1/32`
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.1/32`
2026-03-10T12:42:26.997 INFO:teuthology.orchestra.run.vm06.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-10T12:42:26.998 INFO:teuthology.orchestra.run.vm06.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T12:42:26.998 INFO:teuthology.orchestra.run.vm06.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T12:42:28.048 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-10T12:42:28.048 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T12:42:28.048 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:42:28.048 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:42:28.216 INFO:teuthology.orchestra.run.vm06.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T12:42:28.216 INFO:teuthology.orchestra.run.vm06.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T12:42:28.216 INFO:teuthology.orchestra.run.vm06.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T12:42:28.311 INFO:teuthology.orchestra.run.vm06.stdout:stat: stdout 167 167
2026-03-10T12:42:28.312 INFO:teuthology.orchestra.run.vm06.stdout:Creating initial keys...
2026-03-10T12:42:28.421 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-authtool: stdout AQC0EbBpS1scFxAAU5lQzBWc9c2NFGdXz/d1Hg==
2026-03-10T12:42:28.536 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-authtool: stdout AQC0EbBp/lxjHhAAEK9+ZOtJib+1jJbLD/ogqw==
2026-03-10T12:42:28.657 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-authtool: stdout AQC0EbBpJQdSJRAAqBLf6lTrsxWlfU3aK37FdA==
2026-03-10T12:42:28.658 INFO:teuthology.orchestra.run.vm06.stdout:Creating initial monmap...
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:monmaptool for vm06 [v2:192.168.123.106:3300,v1:192.168.123.106:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:setting min_mon_release = quincy
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: set fsid to 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:28.780 INFO:teuthology.orchestra.run.vm06.stdout:Creating mon...
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 1 imported monmap:
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr fsid 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-10T12:42:28.753887+0000
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr created 2026-03-10T12:42:28.753887+0000
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.vm06
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 0 /usr/bin/ceph-mon: set fsid to 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Git sha 0
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: DB SUMMARY
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: DB Session ID: 4IXGD332NOLA2EPMBLBG
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm06/store.db dir, Total Num: 0, files:
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm06/store.db:
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.error_if_exists: 0
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.create_if_missing: 1
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T12:42:28.917 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.env: 0x558ffcf8fdc0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.info_log: 0x559000336e60
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.statistics: (nil)
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.use_fsync: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.db_log_dir:
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.wal_dir:
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.write_buffer_manager: 0x55900032d5e0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.unordered_write: 0
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T12:42:28.918
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.row_cache: None 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.wal_filter: None 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.persist_stats_to_disk: 0 
2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T12:42:28.918 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_open_files: -1 
2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Compression algorithms supported: 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kZSTD supported: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kLZ4Compression 
supported: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.861+0000 7f7020ca8d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm06/store.db/MANIFEST-000001 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.merge_operator: 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559000329580) 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 
pin_top_level_index_and_filter: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55900034f350 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4 2026-03-10T12:42:28.919 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil) 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil) 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_size: 4096 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 
10 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 
2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.num_levels: 7 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T12:42:28.920 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T12:42:28.920 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T12:42:28.920 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T12:42:28.921 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T12:42:28.921 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T12:42:28.921 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.865+0000 7f7020ca8d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.869+0000 7f7020ca8d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm06/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.869+0000 7f7020ca8d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: 
stderr 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.869+0000 7f7020ca8d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4be74182-5f0b-447b-9c97-5a466c7db0ed 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.869+0000 7f7020ca8d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.869+0000 7f7020ca8d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559000350e00 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.869+0000 7f7020ca8d80 4 rocksdb: DB pointer 0x559000434000 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.873+0000 7f7018432640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.873+0000 7f7018432640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T12:42:28.921 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, 
written: 0.00 GB, 0.00 MB/s 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 
** Compaction Stats [default] ** 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T12:42:28.922 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55900034f350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1e-05 secs_since: 0 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.873+0000 7f7020ca8d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.873+0000 7f7020ca8d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T12:42:28.873+0000 7f7020ca8d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-vm06 for mon.vm06 2026-03-10T12:42:28.922 INFO:teuthology.orchestra.run.vm06.stdout:create 
mon.vm06 on 2026-03-10T12:42:29.262 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T12:42:29.456 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-68e2be40-1c7e-11f1-b779-df2955349a39.target → /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39.target. 2026-03-10T12:42:29.456 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-68e2be40-1c7e-11f1-b779-df2955349a39.target → /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39.target. 2026-03-10T12:42:29.671 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06 2026-03-10T12:42:29.671 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to reset failed state of unit ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06.service: Unit ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06.service not loaded. 2026-03-10T12:42:29.850 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39.target.wants/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06.service → /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service. 2026-03-10T12:42:29.860 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present 2026-03-10T12:42:29.860 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T12:42:29.861 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mon to start... 2026-03-10T12:42:29.861 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mon... 
2026-03-10T12:42:30.081 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17020]: cluster 2026-03-10T12:42:30.009096+0000 mon.vm06 (mon.0) 1 : cluster [INF] mon.vm06 is new leader, mons vm06 in quorum (ranks 0) 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout id: 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout services: 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm06 (age 0.0603767s) 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout data: 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:mon is available 2026-03-10T12:42:30.117 INFO:teuthology.orchestra.run.vm06.stdout:Assimilating anything we can from 
ceph.conf... 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout fsid = 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.106:3300,v1:192.168.123.106:6789] 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T12:42:30.328 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T12:42:30.329 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T12:42:30.329 INFO:teuthology.orchestra.run.vm06.stdout:Generating new minimal ceph.conf... 2026-03-10T12:42:30.528 INFO:teuthology.orchestra.run.vm06.stdout:Restarting the monitor... 
2026-03-10T12:42:30.632 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 systemd[1]: Stopping Ceph mon.vm06 for 68e2be40-1c7e-11f1-b779-df2955349a39... 2026-03-10T12:42:30.632 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17020]: debug 2026-03-10T12:42:30.569+0000 7fdff73ac640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm06 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T12:42:30.632 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17020]: debug 2026-03-10T12:42:30.569+0000 7fdff73ac640 -1 mon.vm06@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T12:42:30.633 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17411]: ceph-68e2be40-1c7e-11f1-b779-df2955349a39-mon-vm06 2026-03-10T12:42:30.654 INFO:teuthology.orchestra.run.vm06.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section 2026-03-10T12:42:30.915 INFO:teuthology.orchestra.run.vm06.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T12:42:30.916 INFO:teuthology.orchestra.run.vm06.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T12:42:30.916 INFO:teuthology.orchestra.run.vm06.stdout:Creating mgr... 2026-03-10T12:42:30.916 INFO:teuthology.orchestra.run.vm06.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T12:42:30.917 INFO:teuthology.orchestra.run.vm06.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T12:42:30.917 INFO:teuthology.orchestra.run.vm06.stdout:Verifying port 0.0.0.0:8443 ... 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 systemd[1]: ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06.service: Deactivated successfully. 
2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 systemd[1]: Stopped Ceph mon.vm06 for 68e2be40-1c7e-11f1-b779-df2955349a39. 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 systemd[1]: Started Ceph mon.vm06 for 68e2be40-1c7e-11f1-b779-df2955349a39. 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 0 load: jerasure load: lrc 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Git sha 0 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: DB SUMMARY 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 
7f33dd0d9d80 4 rocksdb: DB Session ID: WLRV11K1MUY5LVUAFV6O 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm06/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm06/store.db: 000009.log size: 75071 ; 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T12:42:30.932 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 
2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.env: 0x55badb8ccdc0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.info_log: 0x55bb0fa08b20 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: 
Options.log_file_time_to_roll: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.db_log_dir: 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.wal_dir: 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 
10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.write_buffer_manager: 0x55bb0fa0d900 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 
2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T12:42:30.933 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 
rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.row_cache: None 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.wal_filter: None 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T12:42:30.934 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T12:42:30.934 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T12:42:30.934 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Compression algorithms supported: 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kZSTD supported: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 
2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T12:42:30.934 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm06/store.db/MANIFEST-000010 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T12:42:30.935 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.merge_operator: 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bb0fa086e0) 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cache_index_and_filter_blocks: 1 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: 
pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: pin_top_level_index_and_filter: 1 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: index_type: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: data_block_index_type: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: index_shortening: 1 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: checksum: 4 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: no_block_cache: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_cache: 0x55bb0fa2f350 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_cache_name: BinnedLRUCache 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_cache_options: 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: capacity : 536870912 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: num_shard_bits : 4 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: strict_capacity_limit : 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: high_pri_pool_ratio: 0.000 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_cache_compressed: (nil) 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 
vm06 bash[17497]: persistent_cache: (nil) 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_size: 4096 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_size_deviation: 10 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_restart_interval: 16 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: index_block_restart_interval: 1 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: metadata_block_size: 4096 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: partition_filters: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: use_delta_encoding: 1 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: filter_policy: bloomfilter 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: whole_key_filtering: 1 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: verify_compression: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: read_amp_bytes_per_bit: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: format_version: 5 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: enable_index_compression: 1 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: block_align: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: max_auto_readahead_size: 262144 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 
bash[17497]: prepopulate_block_cache: 0 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: initial_auto_readahead_size: 8192 2026-03-10T12:42:30.935 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: num_file_reads_for_auto_readahead: 2 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.num_levels: 7 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 
2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 
bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T12:42:30.936 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T12:42:30.936 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 
4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T12:42:30.937 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T12:42:30.937 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: 
debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 
bash[17497]: debug 2026-03-10T12:42:30.777+0000 7f33dd0d9d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm06/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T12:42:30.937 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4be74182-5f0b-447b-9c97-5a466c7db0ed 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773146550782931, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773146550785235, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 223, "table_properties": {"data_size": 70418, "index_size": 174, "index_partitions": 
0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9562, "raw_average_key_size": 49, "raw_value_size": 65043, "raw_average_value_size": 335, "num_data_blocks": 8, "num_entries": 194, "num_filter_entries": 194, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773146550, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4be74182-5f0b-447b-9c97-5a466c7db0ed", "db_session_id": "WLRV11K1MUY5LVUAFV6O", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773146550785294, "job": 1, "event": "recovery_finished"} 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.781+0000 7f33dd0d9d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.785+0000 7f33dd0d9d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm06/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T12:42:30.938 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.785+0000 7f33dd0d9d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bb0fa30e00 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.785+0000 7f33dd0d9d80 4 rocksdb: DB pointer 0x55bb0fb4a000 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.785+0000 7f33dd0d9d80 0 starting mon.vm06 rank 0 at public addrs [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] at bind addrs [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon_data /var/lib/ceph/mon/ceph-vm06 fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 1 mon.vm06@-1(???) e1 preinit fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 0 mon.vm06@-1(???).mds e1 new map 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 0 mon.vm06@-1(???).mds e1 print_map 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: e1 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: btime 2026-03-10T12:42:30:013516+0000 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate 
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: legacy client fscid: -1 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: No filesystems configured 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 0 mon.vm06@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 0 mon.vm06@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 0 mon.vm06@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 0 mon.vm06@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: debug 2026-03-10T12:42:30.789+0000 7f33dd0d9d80 1 mon.vm06@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796361+0000 mon.vm06 (mon.0) 1 : cluster [INF] mon.vm06 is new leader, mons vm06 in quorum (ranks 0) 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 
10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796361+0000 mon.vm06 (mon.0) 1 : cluster [INF] mon.vm06 is new leader, mons vm06 in quorum (ranks 0) 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796454+0000 mon.vm06 (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796454+0000 mon.vm06 (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796458+0000 mon.vm06 (mon.0) 3 : cluster [DBG] fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796458+0000 mon.vm06 (mon.0) 3 : cluster [DBG] fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796461+0000 mon.vm06 (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T12:42:28.753887+0000 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796461+0000 mon.vm06 (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T12:42:28.753887+0000 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796467+0000 mon.vm06 (mon.0) 5 : cluster [DBG] created 2026-03-10T12:42:28.753887+0000 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796467+0000 mon.vm06 (mon.0) 5 : cluster [DBG] created 2026-03-10T12:42:28.753887+0000 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796470+0000 mon.vm06 (mon.0) 
6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796470+0000 mon.vm06 (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796473+0000 mon.vm06 (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796473+0000 mon.vm06 (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796476+0000 mon.vm06 (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.vm06 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796476+0000 mon.vm06 (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.vm06 2026-03-10T12:42:30.938 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796792+0000 mon.vm06 (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T12:42:30.939 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796792+0000 mon.vm06 (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T12:42:30.939 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796803+0000 mon.vm06 (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T12:42:30.939 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.796803+0000 mon.vm06 (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T12:42:30.939 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: 
cluster 2026-03-10T12:42:30.797429+0000 mon.vm06 (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T12:42:30.939 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:30 vm06 bash[17497]: cluster 2026-03-10T12:42:30.797429+0000 mon.vm06 (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T12:42:31.088 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mgr.vm06.cofomf 2026-03-10T12:42:31.089 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to reset failed state of unit ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mgr.vm06.cofomf.service: Unit ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mgr.vm06.cofomf.service not loaded. 2026-03-10T12:42:31.214 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:31 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:42:31.268 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39.target.wants/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mgr.vm06.cofomf.service → /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service. 2026-03-10T12:42:31.277 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present 2026-03-10T12:42:31.277 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T12:42:31.277 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present 2026-03-10T12:42:31.277 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to open ports <[9283, 8765, 8443]>. 
firewalld.service is not available 2026-03-10T12:42:31.277 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr to start... 2026-03-10T12:42:31.277 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr... 2026-03-10T12:42:31.487 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:31 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsid": "68e2be40-1c7e-11f1-b779-df2955349a39", 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 0 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_names": [ 
2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "vm06" 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T12:42:31.529 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T12:42:30:013516+0000", 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T12:42:31.529 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T12:42:31.530 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T12:42:30.014106+0000", 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:42:31.530 INFO:teuthology.orchestra.run.vm06.stdout:mgr not available, waiting (1/15)... 2026-03-10T12:42:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:31 vm06 bash[17497]: audit 2026-03-10T12:42:30.874455+0000 mon.vm06 (mon.0) 12 : audit [INF] from='client.? 192.168.123.106:0/1032232912' entity='client.admin' 2026-03-10T12:42:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:31 vm06 bash[17497]: audit 2026-03-10T12:42:30.874455+0000 mon.vm06 (mon.0) 12 : audit [INF] from='client.? 192.168.123.106:0/1032232912' entity='client.admin' 2026-03-10T12:42:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:31 vm06 bash[17497]: audit 2026-03-10T12:42:31.482351+0000 mon.vm06 (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.106:0/4266695457' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T12:42:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:31 vm06 bash[17497]: audit 2026-03-10T12:42:31.482351+0000 mon.vm06 (mon.0) 13 : audit [DBG] from='client.? 192.168.123.106:0/4266695457' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T12:42:33.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-10T12:42:33.780 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsid": "68e2be40-1c7e-11f1-b779-df2955349a39", 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 0 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "vm06" 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: 
stdout "quorum_age": 2, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_objects": 0, 
2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T12:42:30:013516+0000", 2026-03-10T12:42:33.781 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:42:33.782 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T12:42:30.014106+0000", 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:42:33.782 INFO:teuthology.orchestra.run.vm06.stdout:mgr not available, waiting (2/15)... 2026-03-10T12:42:34.085 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:33 vm06 bash[17497]: audit 2026-03-10T12:42:33.721932+0000 mon.vm06 (mon.0) 14 : audit [DBG] from='client.? 192.168.123.106:0/2220561944' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T12:42:34.085 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:33 vm06 bash[17497]: audit 2026-03-10T12:42:33.721932+0000 mon.vm06 (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.106:0/2220561944' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: cluster 2026-03-10T12:42:34.752217+0000 mon.vm06 (mon.0) 15 : cluster [INF] Activating manager daemon vm06.cofomf 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: cluster 2026-03-10T12:42:34.752217+0000 mon.vm06 (mon.0) 15 : cluster [INF] Activating manager daemon vm06.cofomf 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: cluster 2026-03-10T12:42:34.764085+0000 mon.vm06 (mon.0) 16 : cluster [DBG] mgrmap e2: vm06.cofomf(active, starting, since 0.0119505s) 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: cluster 2026-03-10T12:42:34.764085+0000 mon.vm06 (mon.0) 16 : cluster [DBG] mgrmap e2: vm06.cofomf(active, starting, since 0.0119505s) 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: audit 2026-03-10T12:42:34.767040+0000 mon.vm06 (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: audit 2026-03-10T12:42:34.767040+0000 mon.vm06 (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: audit 2026-03-10T12:42:34.767411+0000 mon.vm06 (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: audit 
2026-03-10T12:42:34.767411+0000 mon.vm06 (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: audit 2026-03-10T12:42:34.767789+0000 mon.vm06 (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: audit 2026-03-10T12:42:34.768146+0000 mon.vm06 (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch
2026-03-10T12:42:35.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:34 vm06 bash[17497]: audit 2026-03-10T12:42:34.768478+0000 mon.vm06 (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm06.cofomf", "id": "vm06.cofomf"}]: dispatch
2026-03-10T12:42:36.052 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:35 vm06 bash[17497]: cluster 2026-03-10T12:42:34.779989+0000 mon.vm06 (mon.0) 22 : cluster [INF] Manager daemon vm06.cofomf is now available
2026-03-10T12:42:36.052 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:35 vm06 bash[17497]: audit 2026-03-10T12:42:34.788665+0000 mon.vm06 (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf'
2026-03-10T12:42:36.052 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:35 vm06 bash[17497]: audit 2026-03-10T12:42:34.789298+0000 mon.vm06 (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/mirror_snapshot_schedule"}]: dispatch
2026-03-10T12:42:36.052 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:35 vm06 bash[17497]: audit 2026-03-10T12:42:34.791015+0000 mon.vm06 (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/trash_purge_schedule"}]: dispatch
2026-03-10T12:42:36.052 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:35 vm06 bash[17497]: audit 2026-03-10T12:42:34.792608+0000 mon.vm06 (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf'
2026-03-10T12:42:36.052 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:35 vm06 bash[17497]: audit 2026-03-10T12:42:34.797220+0000 mon.vm06 (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.106:0/3542702398' entity='mgr.vm06.cofomf'
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout {
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsid": "68e2be40-1c7e-11f1-b779-df2955349a39",
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T12:42:36.091 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout },
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 0
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ],
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "vm06"
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ],
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout },
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T12:42:36.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout },
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout },
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T12:42:30.013516+0000",
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout },
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ],
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout },
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T12:42:30.014106+0000",
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout },
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }
2026-03-10T12:42:36.092 INFO:teuthology.orchestra.run.vm06.stdout:mgr is available
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout fsid = 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.106:3300,v1:192.168.123.106:6789]
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T12:42:36.368 INFO:teuthology.orchestra.run.vm06.stdout:Enabling cephadm module...
2026-03-10T12:42:37.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:36 vm06 bash[17497]: cluster 2026-03-10T12:42:35.794404+0000 mon.vm06 (mon.0) 28 : cluster [DBG] mgrmap e3: vm06.cofomf(active, since 1.04227s)
2026-03-10T12:42:37.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:36 vm06 bash[17497]: audit 2026-03-10T12:42:36.041270+0000 mon.vm06 (mon.0) 29 : audit [DBG] from='client.? 192.168.123.106:0/2953997603' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T12:42:37.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:36 vm06 bash[17497]: audit 2026-03-10T12:42:36.320567+0000 mon.vm06 (mon.0) 30 : audit [INF] from='client.? 192.168.123.106:0/60127556' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T12:42:37.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:36 vm06 bash[17497]: audit 2026-03-10T12:42:36.611199+0000 mon.vm06 (mon.0) 31 : audit [INF] from='client.? 192.168.123.106:0/3645360261' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout {
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 4,
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "active_name": "vm06.cofomf",
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for the mgr to restart...
2026-03-10T12:42:37.249 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr epoch 4...
2026-03-10T12:42:38.065 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:37 vm06 bash[17497]: audit 2026-03-10T12:42:36.795714+0000 mon.vm06 (mon.0) 32 : audit [INF] from='client.? 192.168.123.106:0/3645360261' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T12:42:38.065 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:37 vm06 bash[17497]: cluster 2026-03-10T12:42:36.801813+0000 mon.vm06 (mon.0) 33 : cluster [DBG] mgrmap e4: vm06.cofomf(active, since 2s)
2026-03-10T12:42:38.065 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:37 vm06 bash[17497]: audit 2026-03-10T12:42:37.153008+0000 mon.vm06 (mon.0) 34 : audit [DBG] from='client.? 192.168.123.106:0/132729003' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: cluster 2026-03-10T12:42:40.276863+0000 mon.vm06 (mon.0) 35 : cluster [INF] Active manager daemon vm06.cofomf restarted
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: cluster 2026-03-10T12:42:40.277327+0000 mon.vm06 (mon.0) 36 : cluster [INF] Activating manager daemon vm06.cofomf
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: cluster 2026-03-10T12:42:40.282214+0000 mon.vm06 (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: cluster 2026-03-10T12:42:40.282379+0000 mon.vm06 (mon.0) 38 : cluster [DBG] mgrmap e5: vm06.cofomf(active, starting, since 0.00518416s)
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: audit 2026-03-10T12:42:40.285987+0000 mon.vm06 (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: audit 2026-03-10T12:42:40.287121+0000 mon.vm06 (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm06.cofomf", "id": "vm06.cofomf"}]: dispatch
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: audit 2026-03-10T12:42:40.288058+0000 mon.vm06 (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: audit 2026-03-10T12:42:40.288387+0000 mon.vm06 (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: audit 2026-03-10T12:42:40.288694+0000 mon.vm06 (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T12:42:40.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:40 vm06 bash[17497]: cluster 2026-03-10T12:42:40.295146+0000 mon.vm06 (mon.0) 44 : cluster [INF] Manager daemon vm06.cofomf is now available
2026-03-10T12:42:41.473 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout {
2026-03-10T12:42:41.473 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6,
2026-03-10T12:42:41.473 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T12:42:41.473 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }
2026-03-10T12:42:41.473 INFO:teuthology.orchestra.run.vm06.stdout:mgr epoch 4 is available
2026-03-10T12:42:41.473 INFO:teuthology.orchestra.run.vm06.stdout:Setting orchestrator backend to cephadm...
2026-03-10T12:42:41.674 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: cephadm 2026-03-10T12:42:40.302166+0000 mgr.vm06.cofomf (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-10T12:42:41.674 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:40.362777+0000 mon.vm06 (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf'
2026-03-10T12:42:41.674 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:40.377634+0000 mon.vm06 (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf'
2026-03-10T12:42:41.674 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:40.391692+0000 mon.vm06 (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:42:41.674 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:40.392686+0000 mon.vm06 (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:42:41.674 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:40.405921+0000 mon.vm06 (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/mirror_snapshot_schedule"}]: dispatch
2026-03-10T12:42:41.674 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:40.411990+0000 mon.vm06 (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/trash_purge_schedule"}]: dispatch
2026-03-10T12:42:41.675 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:41.021669+0000 mon.vm06 (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf'
2026-03-10T12:42:41.675 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:41 vm06 bash[17497]: audit 2026-03-10T12:42:41.028127+0000 mon.vm06 (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf'
2026-03-10T12:42:42.491 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:42 vm06 bash[17497]: audit 2026-03-10T12:42:41.370838+0000 mgr.vm06.cofomf (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T12:42:42.491 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:42 vm06 bash[17497]: cluster 2026-03-10T12:42:41.374730+0000 mon.vm06 (mon.0) 53 : cluster [DBG] mgrmap e6: vm06.cofomf(active, since 1.09753s)
2026-03-10T12:42:42.491 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:42 vm06 bash[17497]: audit 2026-03-10T12:42:41.377409+0000 mgr.vm06.cofomf (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T12:42:42.491 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:42 vm06 bash[17497]: cephadm 2026-03-10T12:42:41.810238+0000 mgr.vm06.cofomf (mgr.14118) 4 : cephadm [INF] [10/Mar/2026:12:42:41] ENGINE Bus STARTING
2026-03-10T12:42:42.491 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:42 vm06 bash[17497]: audit 2026-03-10T12:42:42.033778+0000 mon.vm06 (mon.0) 54 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:42:42.491 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:42 vm06 bash[17497]: audit 2026-03-10T12:42:42.137490+0000 mon.vm06 (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf'
2026-03-10T12:42:42.492 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:42 vm06 bash[17497]: audit 2026-03-10T12:42:42.145949+0000 mon.vm06 (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:42:42.521 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-10T12:42:42.521 INFO:teuthology.orchestra.run.vm06.stdout:Generating ssh key...
2026-03-10T12:42:43.096 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6z2W/wHmk6wvUo2R8g2juiLF+UHM/ZRV1YTA9vIads ceph-68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:43.096 INFO:teuthology.orchestra.run.vm06.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-10T12:42:43.096 INFO:teuthology.orchestra.run.vm06.stdout:Adding key to root@localhost authorized_keys...
2026-03-10T12:42:43.096 INFO:teuthology.orchestra.run.vm06.stdout:Adding host vm06...
2026-03-10T12:42:43.978 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: cephadm 2026-03-10T12:42:41.912523+0000 mgr.vm06.cofomf (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:12:42:41] ENGINE Serving on http://192.168.123.106:8765
2026-03-10T12:42:43.978 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: cephadm 2026-03-10T12:42:42.032651+0000 mgr.vm06.cofomf (mgr.14118) 6 : cephadm [INF] [10/Mar/2026:12:42:42] ENGINE Serving on https://192.168.123.106:7150
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: cephadm 2026-03-10T12:42:42.032697+0000 mgr.vm06.cofomf (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:12:42:42] ENGINE Bus STARTED
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: cephadm 2026-03-10T12:42:42.033311+0000 mgr.vm06.cofomf (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:12:42:42] ENGINE Client ('192.168.123.106', 52026) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: audit 2026-03-10T12:42:42.093917+0000 mgr.vm06.cofomf (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: audit 2026-03-10T12:42:42.479192+0000 mgr.vm06.cofomf (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: audit 2026-03-10T12:42:42.757846+0000 mgr.vm06.cofomf (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: cephadm 2026-03-10T12:42:42.758070+0000 mgr.vm06.cofomf (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: audit 2026-03-10T12:42:42.776070+0000 mon.vm06 (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf'
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: audit 2026-03-10T12:42:42.778763+0000 mon.vm06 (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf'
2026-03-10T12:42:43.979 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:43 vm06 bash[17497]: cluster 2026-03-10T12:42:43.141532+0000 mon.vm06 (mon.0) 59 : cluster [DBG] mgrmap e7: vm06.cofomf(active, since 2s)
2026-03-10T12:42:45.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:44 vm06 bash[17497]: audit 2026-03-10T12:42:43.057319+0000 mgr.vm06.cofomf (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:42:45.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:44 vm06 bash[17497]: audit 2026-03-10T12:42:43.340046+0000 mgr.vm06.cofomf (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "addr": "192.168.123.106", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:42:45.413 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Added host 'vm06' with addr '192.168.123.106'
2026-03-10T12:42:45.413 INFO:teuthology.orchestra.run.vm06.stdout:Deploying mon service with default placement...
2026-03-10T12:42:45.787 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T12:42:45.787 INFO:teuthology.orchestra.run.vm06.stdout:Deploying mgr service with default placement...
2026-03-10T12:42:46.034 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: cephadm 2026-03-10T12:42:43.940795+0000 mgr.vm06.cofomf (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm06 2026-03-10T12:42:46.035 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: cephadm 2026-03-10T12:42:43.940795+0000 mgr.vm06.cofomf (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm06 2026-03-10T12:42:46.035 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: audit 2026-03-10T12:42:45.348766+0000 mon.vm06 (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:46.035 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: audit 2026-03-10T12:42:45.348766+0000 mon.vm06 (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:46.035 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: audit 2026-03-10T12:42:45.350005+0000 mon.vm06 (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:42:46.035 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: audit 2026-03-10T12:42:45.350005+0000 mon.vm06 (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:42:46.035 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: audit 2026-03-10T12:42:45.741378+0000 mon.vm06 (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:46.035 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:45 vm06 bash[17497]: audit 2026-03-10T12:42:45.741378+0000 mon.vm06 (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' 
entity='mgr.vm06.cofomf' 2026-03-10T12:42:46.069 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T12:42:46.069 INFO:teuthology.orchestra.run.vm06.stdout:Deploying crash service with default placement... 2026-03-10T12:42:46.362 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled crash update... 2026-03-10T12:42:46.362 INFO:teuthology.orchestra.run.vm06.stdout:Deploying ceph-exporter service with default placement... 2026-03-10T12:42:46.676 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update... 2026-03-10T12:42:46.676 INFO:teuthology.orchestra.run.vm06.stdout:Deploying prometheus service with default placement... 2026-03-10T12:42:47.042 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: cephadm 2026-03-10T12:42:45.349427+0000 mgr.vm06.cofomf (mgr.14118) 16 : cephadm [INF] Added host vm06 2026-03-10T12:42:47.042 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: cephadm 2026-03-10T12:42:45.349427+0000 mgr.vm06.cofomf (mgr.14118) 16 : cephadm [INF] Added host vm06 2026-03-10T12:42:47.042 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:45.736944+0000 mgr.vm06.cofomf (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:47.042 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:45.736944+0000 mgr.vm06.cofomf (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:47.042 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: cephadm 2026-03-10T12:42:45.738063+0000 mgr.vm06.cofomf (mgr.14118) 18 : cephadm [INF] Saving service mon spec with 
placement count:5 2026-03-10T12:42:47.042 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: cephadm 2026-03-10T12:42:45.738063+0000 mgr.vm06.cofomf (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T12:42:47.042 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:46.023186+0000 mon.vm06 (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:47.043 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:46.023186+0000 mon.vm06 (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:47.043 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:46.320416+0000 mon.vm06 (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:47.043 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:46.320416+0000 mon.vm06 (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:47.043 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:46.612090+0000 mon.vm06 (mon.0) 65 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:47.043 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:46 vm06 bash[17497]: audit 2026-03-10T12:42:46.612090+0000 mon.vm06 (mon.0) 65 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:47.078 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled prometheus update... 2026-03-10T12:42:47.078 INFO:teuthology.orchestra.run.vm06.stdout:Deploying grafana service with default placement... 
2026-03-10T12:42:47.442 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled grafana update... 2026-03-10T12:42:47.442 INFO:teuthology.orchestra.run.vm06.stdout:Deploying node-exporter service with default placement... 2026-03-10T12:42:47.789 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update... 2026-03-10T12:42:47.789 INFO:teuthology.orchestra.run.vm06.stdout:Deploying alertmanager service with default placement... 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:46.019053+0000 mgr.vm06.cofomf (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:46.019053+0000 mgr.vm06.cofomf (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: cephadm 2026-03-10T12:42:46.019838+0000 mgr.vm06.cofomf (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: cephadm 2026-03-10T12:42:46.019838+0000 mgr.vm06.cofomf (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:46.316198+0000 mgr.vm06.cofomf (mgr.14118) 21 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 
bash[17497]: audit 2026-03-10T12:42:46.316198+0000 mgr.vm06.cofomf (mgr.14118) 21 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: cephadm 2026-03-10T12:42:46.317021+0000 mgr.vm06.cofomf (mgr.14118) 22 : cephadm [INF] Saving service crash spec with placement * 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: cephadm 2026-03-10T12:42:46.317021+0000 mgr.vm06.cofomf (mgr.14118) 22 : cephadm [INF] Saving service crash spec with placement * 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:46.607467+0000 mgr.vm06.cofomf (mgr.14118) 23 : audit [DBG] from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:46.607467+0000 mgr.vm06.cofomf (mgr.14118) 23 : audit [DBG] from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: cephadm 2026-03-10T12:42:46.608512+0000 mgr.vm06.cofomf (mgr.14118) 24 : cephadm [INF] Saving service ceph-exporter spec with placement * 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: cephadm 2026-03-10T12:42:46.608512+0000 mgr.vm06.cofomf (mgr.14118) 24 : cephadm [INF] Saving service ceph-exporter spec with placement * 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:46.895687+0000 mon.vm06 (mon.0) 66 : audit [INF] 
from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:46.895687+0000 mon.vm06 (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.018402+0000 mon.vm06 (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.018402+0000 mon.vm06 (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.264940+0000 mon.vm06 (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.264940+0000 mon.vm06 (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.370142+0000 mon.vm06 (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.370142+0000 mon.vm06 (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.744597+0000 mon.vm06 (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' 
entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:47 vm06 bash[17497]: audit 2026-03-10T12:42:47.744597+0000 mon.vm06 (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:48.091 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update... 2026-03-10T12:42:48.662 INFO:teuthology.orchestra.run.vm06.stdout:Enabling the dashboard module... 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:47.014584+0000 mgr.vm06.cofomf (mgr.14118) 25 : audit [DBG] from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:47.014584+0000 mgr.vm06.cofomf (mgr.14118) 25 : audit [DBG] from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: cephadm 2026-03-10T12:42:47.015391+0000 mgr.vm06.cofomf (mgr.14118) 26 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: cephadm 2026-03-10T12:42:47.015391+0000 mgr.vm06.cofomf (mgr.14118) 26 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:47.365892+0000 mgr.vm06.cofomf (mgr.14118) 27 : audit [DBG] from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:49.097 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:47.365892+0000 mgr.vm06.cofomf (mgr.14118) 27 : audit [DBG] from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: cephadm 2026-03-10T12:42:47.366800+0000 mgr.vm06.cofomf (mgr.14118) 28 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: cephadm 2026-03-10T12:42:47.366800+0000 mgr.vm06.cofomf (mgr.14118) 28 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:47.741060+0000 mgr.vm06.cofomf (mgr.14118) 29 : audit [DBG] from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:47.741060+0000 mgr.vm06.cofomf (mgr.14118) 29 : audit [DBG] from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: cephadm 2026-03-10T12:42:47.741806+0000 mgr.vm06.cofomf (mgr.14118) 30 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: cephadm 2026-03-10T12:42:47.741806+0000 mgr.vm06.cofomf (mgr.14118) 30 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 
bash[17497]: audit 2026-03-10T12:42:48.044830+0000 mon.vm06 (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:48.044830+0000 mon.vm06 (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.106:0/537702941' entity='mgr.vm06.cofomf' 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:48.346660+0000 mon.vm06 (mon.0) 72 : audit [INF] from='client.? 192.168.123.106:0/3755573138' entity='client.admin' 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:48.346660+0000 mon.vm06 (mon.0) 72 : audit [INF] from='client.? 192.168.123.106:0/3755573138' entity='client.admin' 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:48.619121+0000 mon.vm06 (mon.0) 73 : audit [INF] from='client.? 192.168.123.106:0/1373032040' entity='client.admin' 2026-03-10T12:42:49.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:48 vm06 bash[17497]: audit 2026-03-10T12:42:48.619121+0000 mon.vm06 (mon.0) 73 : audit [INF] from='client.? 
192.168.123.106:0/1373032040' entity='client.admin' 2026-03-10T12:42:50.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:49 vm06 bash[17497]: audit 2026-03-10T12:42:48.040051+0000 mgr.vm06.cofomf (mgr.14118) 31 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:50.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:49 vm06 bash[17497]: audit 2026-03-10T12:42:48.040051+0000 mgr.vm06.cofomf (mgr.14118) 31 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:42:50.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:49 vm06 bash[17497]: cephadm 2026-03-10T12:42:48.040745+0000 mgr.vm06.cofomf (mgr.14118) 32 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-10T12:42:50.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:49 vm06 bash[17497]: cephadm 2026-03-10T12:42:48.040745+0000 mgr.vm06.cofomf (mgr.14118) 32 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-10T12:42:50.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:49 vm06 bash[17497]: audit 2026-03-10T12:42:48.921752+0000 mon.vm06 (mon.0) 74 : audit [INF] from='client.? 192.168.123.106:0/2465575504' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T12:42:50.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:49 vm06 bash[17497]: audit 2026-03-10T12:42:48.921752+0000 mon.vm06 (mon.0) 74 : audit [INF] from='client.? 
192.168.123.106:0/2465575504' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "active_name": "vm06.cofomf", 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for the mgr to restart... 2026-03-10T12:42:50.239 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr epoch 8... 2026-03-10T12:42:51.089 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:50 vm06 bash[17497]: audit 2026-03-10T12:42:49.793969+0000 mon.vm06 (mon.0) 75 : audit [INF] from='client.? 192.168.123.106:0/2465575504' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T12:42:51.090 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:50 vm06 bash[17497]: audit 2026-03-10T12:42:49.793969+0000 mon.vm06 (mon.0) 75 : audit [INF] from='client.? 
192.168.123.106:0/2465575504' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T12:42:51.090 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:50 vm06 bash[17497]: cluster 2026-03-10T12:42:49.796358+0000 mon.vm06 (mon.0) 76 : cluster [DBG] mgrmap e8: vm06.cofomf(active, since 9s) 2026-03-10T12:42:51.090 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:50 vm06 bash[17497]: cluster 2026-03-10T12:42:49.796358+0000 mon.vm06 (mon.0) 76 : cluster [DBG] mgrmap e8: vm06.cofomf(active, since 9s) 2026-03-10T12:42:51.090 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:50 vm06 bash[17497]: audit 2026-03-10T12:42:50.195218+0000 mon.vm06 (mon.0) 77 : audit [DBG] from='client.? 192.168.123.106:0/639980317' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T12:42:51.090 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:50 vm06 bash[17497]: audit 2026-03-10T12:42:50.195218+0000 mon.vm06 (mon.0) 77 : audit [DBG] from='client.? 
192.168.123.106:0/639980317' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.374264+0000 mon.vm06 (mon.0) 78 : cluster [INF] Active manager daemon vm06.cofomf restarted 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.374264+0000 mon.vm06 (mon.0) 78 : cluster [INF] Active manager daemon vm06.cofomf restarted 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.374508+0000 mon.vm06 (mon.0) 79 : cluster [INF] Activating manager daemon vm06.cofomf 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.374508+0000 mon.vm06 (mon.0) 79 : cluster [INF] Activating manager daemon vm06.cofomf 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.381346+0000 mon.vm06 (mon.0) 80 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.381346+0000 mon.vm06 (mon.0) 80 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.381497+0000 mon.vm06 (mon.0) 81 : cluster [DBG] mgrmap e9: vm06.cofomf(active, starting, since 0.00709842s) 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.381497+0000 mon.vm06 (mon.0) 81 : cluster [DBG] mgrmap e9: vm06.cofomf(active, starting, since 0.00709842s) 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385224+0000 mon.vm06 (mon.0) 
82 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385224+0000 mon.vm06 (mon.0) 82 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385472+0000 mon.vm06 (mon.0) 83 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm06.cofomf", "id": "vm06.cofomf"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385472+0000 mon.vm06 (mon.0) 83 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm06.cofomf", "id": "vm06.cofomf"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385714+0000 mon.vm06 (mon.0) 84 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385714+0000 mon.vm06 (mon.0) 84 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385908+0000 mon.vm06 (mon.0) 85 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:42:53.847 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.385908+0000 mon.vm06 (mon.0) 85 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.386005+0000 mon.vm06 (mon.0) 86 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.386005+0000 mon.vm06 (mon.0) 86 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.393906+0000 mon.vm06 (mon.0) 87 : cluster [INF] Manager daemon vm06.cofomf is now available 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: cluster 2026-03-10T12:42:53.393906+0000 mon.vm06 (mon.0) 87 : cluster [INF] Manager daemon vm06.cofomf is now available 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.416673+0000 mon.vm06 (mon.0) 88 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.416673+0000 mon.vm06 (mon.0) 88 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:42:53.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 
2026-03-10T12:42:53.417055+0000 mon.vm06 (mon.0) 89 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:42:53.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.417055+0000 mon.vm06 (mon.0) 89 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:42:53.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.419666+0000 mon.vm06 (mon.0) 90 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/trash_purge_schedule"}]: dispatch 2026-03-10T12:42:53.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:53 vm06 bash[17497]: audit 2026-03-10T12:42:53.419666+0000 mon.vm06 (mon.0) 90 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/trash_purge_schedule"}]: dispatch 2026-03-10T12:42:54.449 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:42:54.449 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-10T12:42:54.449 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T12:42:54.449 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:42:54.449 INFO:teuthology.orchestra.run.vm06.stdout:mgr epoch 8 is available 2026-03-10T12:42:54.449 INFO:teuthology.orchestra.run.vm06.stdout:Generating a dashboard self-signed certificate... 
2026-03-10T12:42:54.753 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T12:42:54.753 INFO:teuthology.orchestra.run.vm06.stdout:Creating initial admin user...
2026-03-10T12:42:55.184 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$muSv0CC23jAIJKb.Kzn/teeBEzfPLmKCDc6RpB2rWZzVPZZ5HCeBO", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773146575, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T12:42:55.184 INFO:teuthology.orchestra.run.vm06.stdout:Fetching dashboard port number...
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: cephadm 2026-03-10T12:42:54.076406+0000 mgr.vm06.cofomf (mgr.14162) 1 : cephadm [INF] [10/Mar/2026:12:42:54] ENGINE Bus STARTING
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: cephadm 2026-03-10T12:42:54.185098+0000 mgr.vm06.cofomf (mgr.14162) 2 : cephadm [INF] [10/Mar/2026:12:42:54] ENGINE Serving on https://192.168.123.106:7150
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: cephadm 2026-03-10T12:42:54.185836+0000 mgr.vm06.cofomf (mgr.14162) 3 : cephadm [INF] [10/Mar/2026:12:42:54] ENGINE Client ('192.168.123.106', 48908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: cephadm 2026-03-10T12:42:54.286599+0000 mgr.vm06.cofomf (mgr.14162) 4 : cephadm [INF] [10/Mar/2026:12:42:54] ENGINE Serving on http://192.168.123.106:8765
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: cephadm 2026-03-10T12:42:54.286638+0000 mgr.vm06.cofomf (mgr.14162) 5 : cephadm [INF] [10/Mar/2026:12:42:54] ENGINE Bus STARTED
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: audit 2026-03-10T12:42:54.401029+0000 mgr.vm06.cofomf (mgr.14162) 6 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: cluster 2026-03-10T12:42:54.403987+0000 mon.vm06 (mon.0) 91 : cluster [DBG] mgrmap e10: vm06.cofomf(active, since 1.02958s)
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: audit 2026-03-10T12:42:54.405088+0000 mgr.vm06.cofomf (mgr.14162) 7 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: audit 2026-03-10T12:42:54.676514+0000 mgr.vm06.cofomf (mgr.14162) 8 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: audit 2026-03-10T12:42:54.706480+0000 mon.vm06 (mon.0) 92 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: audit 2026-03-10T12:42:54.708966+0000 mon.vm06 (mon.0) 93 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:42:55.429 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:55 vm06 bash[17497]: audit 2026-03-10T12:42:55.140966+0000 mon.vm06 (mon.0) 94 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:42:55.461 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T12:42:55.461 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present
2026-03-10T12:42:55.461 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T12:42:55.462 INFO:teuthology.orchestra.run.vm06.stdout:Ceph Dashboard is now available at:
2026-03-10T12:42:55.462 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.462 INFO:teuthology.orchestra.run.vm06.stdout: URL: https://vm06.local:8443/
2026-03-10T12:42:55.462 INFO:teuthology.orchestra.run.vm06.stdout: User: admin
2026-03-10T12:42:55.462 INFO:teuthology.orchestra.run.vm06.stdout: Password: eq1nuklz8b
2026-03-10T12:42:55.463 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.463 INFO:teuthology.orchestra.run.vm06.stdout:Saving cluster configuration to /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config directory
2026-03-10T12:42:55.789 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout: ceph telemetry on
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:For more information see:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T12:42:55.790 INFO:teuthology.orchestra.run.vm06.stdout:Bootstrap complete.
2026-03-10T12:42:55.818 INFO:tasks.cephadm:Fetching config...
2026-03-10T12:42:55.818 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T12:42:55.818 DEBUG:teuthology.orchestra.run.vm06:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T12:42:55.821 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T12:42:55.822 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T12:42:55.822 DEBUG:teuthology.orchestra.run.vm06:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T12:42:55.869 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T12:42:55.869 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T12:42:55.869 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/keyring of=/dev/stdout
2026-03-10T12:42:55.923 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-10T12:42:55.924 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T12:42:55.924 DEBUG:teuthology.orchestra.run.vm06:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T12:42:55.968 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-10T12:42:55.968 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6z2W/wHmk6wvUo2R8g2juiLF+UHM/ZRV1YTA9vIads ceph-68e2be40-1c7e-11f1-b779-df2955349a39' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T12:42:56.020 INFO:teuthology.orchestra.run.vm06.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6z2W/wHmk6wvUo2R8g2juiLF+UHM/ZRV1YTA9vIads ceph-68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:56.025 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6z2W/wHmk6wvUo2R8g2juiLF+UHM/ZRV1YTA9vIads ceph-68e2be40-1c7e-11f1-b779-df2955349a39' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T12:42:56.037 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6z2W/wHmk6wvUo2R8g2juiLF+UHM/ZRV1YTA9vIads ceph-68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:42:56.043 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-10T12:42:56.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:56 vm06 bash[17497]: audit 2026-03-10T12:42:54.988372+0000 mgr.vm06.cofomf (mgr.14162) 9 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:42:56.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:56 vm06 bash[17497]: audit 2026-03-10T12:42:55.419545+0000 mon.vm06 (mon.0) 95 : audit [DBG] from='client.? 192.168.123.106:0/1897105183' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T12:42:56.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:56 vm06 bash[17497]: audit 2026-03-10T12:42:55.745518+0000 mon.vm06 (mon.0) 96 : audit [INF] from='client.? 192.168.123.106:0/87275347' entity='client.admin'
2026-03-10T12:42:56.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:56 vm06 bash[17497]: cluster 2026-03-10T12:42:56.145169+0000 mon.vm06 (mon.0) 97 : cluster [DBG] mgrmap e11: vm06.cofomf(active, since 2s)
2026-03-10T12:42:59.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:59 vm06 bash[17497]: audit 2026-03-10T12:42:58.487481+0000 mon.vm06 (mon.0) 98 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:42:59.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:42:59 vm06 bash[17497]: audit 2026-03-10T12:42:59.160406+0000 mon.vm06 (mon.0) 99 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:00.320 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config
2026-03-10T12:43:00.647 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-10T12:43:00.647 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-10T12:43:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:01 vm06 bash[17497]: cluster 2026-03-10T12:43:00.163976+0000 mon.vm06 (mon.0) 100 : cluster [DBG] mgrmap e12: vm06.cofomf(active, since 6s)
2026-03-10T12:43:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:01 vm06 bash[17497]: audit 2026-03-10T12:43:00.578708+0000 mon.vm06 (mon.0) 101 : audit [INF] from='client.? 192.168.123.106:0/988302703' entity='client.admin'
2026-03-10T12:43:05.342 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config
2026-03-10T12:43:05.682 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.872569+0000 mon.vm06 (mon.0) 102 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.875412+0000 mon.vm06 (mon.0) 103 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.876123+0000 mon.vm06 (mon.0) 104 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.879063+0000 mon.vm06 (mon.0) 105 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.879063+0000 mon.vm06 (mon.0) 105 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.880144+0000 mon.vm06 (mon.0) 106 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm06", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.880985+0000 mon.vm06 (mon.0) 107 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm06", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:04.882309+0000 mon.vm06 (mon.0) 108 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: cephadm 2026-03-10T12:43:04.882944+0000 mgr.vm06.cofomf (mgr.14162) 10 : cephadm [INF] Deploying daemon ceph-exporter.vm06 on vm06
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:05.739344+0000 mon.vm06 (mon.0) 109 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:05.745253+0000 mon.vm06 (mon.0) 110 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:05.748375+0000 mon.vm06 (mon.0) 111 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:05.751907+0000 mon.vm06 (mon.0) 112 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:05.753069+0000 mon.vm06 (mon.0) 113 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm06", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:05.754826+0000 mon.vm06 (mon.0) 114 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm06", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: audit 2026-03-10T12:43:05.756903+0000 mon.vm06 (mon.0) 115 : audit [DBG] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:05.957 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:05 vm06 bash[17497]: cephadm 2026-03-10T12:43:05.757887+0000 mgr.vm06.cofomf (mgr.14162) 11 : cephadm [INF] Deploying daemon crash.vm06 on vm06
2026-03-10T12:43:06.026 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09
2026-03-10T12:43:06.026 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T12:43:06.026 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf
2026-03-10T12:43:06.030 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T12:43:06.030 DEBUG:teuthology.orchestra.run.vm09:> dd
of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T12:43:06.074 INFO:tasks.cephadm:Adding host vm09 to orchestrator...
2026-03-10T12:43:06.074 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph orch host add vm09
2026-03-10T12:43:06.516 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:43:06.797 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:43:07.057 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 bash[17497]: audit 2026-03-10T12:43:05.944700+0000 mgr.vm06.cofomf (mgr.14162) 12 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:43:07.057 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 bash[17497]: audit 2026-03-10T12:43:05.947904+0000 mon.vm06 (mon.0) 116 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:07.057 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 bash[17497]: audit 2026-03-10T12:43:06.619974+0000 mon.vm06 (mon.0) 117 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:07.057 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 bash[17497]: audit 2026-03-10T12:43:06.623575+0000 mon.vm06 (mon.0) 118 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:07.057 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 bash[17497]: audit 2026-03-10T12:43:06.628032+0000 mon.vm06 (mon.0) 119 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:07.057 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 bash[17497]: audit 2026-03-10T12:43:06.631996+0000 mon.vm06 (mon.0) 120 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:07.058 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:06 vm06 bash[17497]: cephadm 2026-03-10T12:43:06.633349+0000 mgr.vm06.cofomf (mgr.14162) 13 : cephadm [INF] Deploying daemon node-exporter.vm06 on vm06
2026-03-10T12:43:07.315 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:07 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:43:08.784 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:08 vm06 bash[17497]: audit 2026-03-10T12:43:07.352799+0000 mon.vm06 (mon.0) 121 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:08.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:08 vm06 bash[17497]: audit 2026-03-10T12:43:07.356574+0000 mon.vm06 (mon.0) 122 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf'
2026-03-10T12:43:08.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:08 vm06 bash[17497]: audit 2026-03-10T12:43:07.359929+0000 mon.vm06 (mon.0) 123
: audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:08.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:08 vm06 bash[17497]: audit 2026-03-10T12:43:07.363277+0000 mon.vm06 (mon.0) 124 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:08.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:08 vm06 bash[17497]: audit 2026-03-10T12:43:07.363277+0000 mon.vm06 (mon.0) 124 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:08.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:08 vm06 bash[17497]: cephadm 2026-03-10T12:43:07.369431+0000 mgr.vm06.cofomf (mgr.14162) 14 : cephadm [INF] Deploying daemon alertmanager.vm06 on vm06 2026-03-10T12:43:08.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:08 vm06 bash[17497]: cephadm 2026-03-10T12:43:07.369431+0000 mgr.vm06.cofomf (mgr.14162) 14 : cephadm [INF] Deploying daemon alertmanager.vm06 on vm06 2026-03-10T12:43:09.689 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:09 vm06 bash[17497]: audit 2026-03-10T12:43:08.414091+0000 mon.vm06 (mon.0) 125 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:09.689 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:09 vm06 bash[17497]: audit 2026-03-10T12:43:08.414091+0000 mon.vm06 (mon.0) 125 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:11.729 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:43:11.968 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:11 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:43:12.242 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:12 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.147393+0000 mgr.vm06.cofomf (mgr.14162) 15 : audit [DBG] from='client.14187 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.147393+0000 mgr.vm06.cofomf (mgr.14162) 15 : audit [DBG] from='client.14187 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.241442+0000 mon.vm06 (mon.0) 126 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.241442+0000 mon.vm06 (mon.0) 126 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.248731+0000 mon.vm06 (mon.0) 127 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 
2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.248731+0000 mon.vm06 (mon.0) 127 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.256446+0000 mon.vm06 (mon.0) 128 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.256446+0000 mon.vm06 (mon.0) 128 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.259039+0000 mon.vm06 (mon.0) 129 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.259039+0000 mon.vm06 (mon.0) 129 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.261159+0000 mon.vm06 (mon.0) 130 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.261159+0000 mon.vm06 (mon.0) 130 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.262759+0000 mon.vm06 (mon.0) 131 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.262759+0000 mon.vm06 (mon.0) 131 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: cephadm 2026-03-10T12:43:12.267065+0000 mgr.vm06.cofomf (mgr.14162) 16 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: cephadm 2026-03-10T12:43:12.267065+0000 mgr.vm06.cofomf (mgr.14162) 16 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.299445+0000 mon.vm06 (mon.0) 132 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.299445+0000 mon.vm06 (mon.0) 132 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.302029+0000 mon.vm06 (mon.0) 133 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.302029+0000 mon.vm06 (mon.0) 133 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.303015+0000 mon.vm06 (mon.0) 134 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": 
"false"}]: dispatch 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.303015+0000 mon.vm06 (mon.0) 134 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.303293+0000 mgr.vm06.cofomf (mgr.14162) 17 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.303293+0000 mgr.vm06.cofomf (mgr.14162) 17 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.306123+0000 mon.vm06 (mon.0) 135 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: audit 2026-03-10T12:43:12.306123+0000 mon.vm06 (mon.0) 135 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:13.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: cephadm 2026-03-10T12:43:12.312867+0000 mgr.vm06.cofomf (mgr.14162) 18 : cephadm [INF] Deploying daemon grafana.vm06 on vm06 2026-03-10T12:43:13.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: cephadm 2026-03-10T12:43:12.312867+0000 mgr.vm06.cofomf (mgr.14162) 18 : cephadm [INF] Deploying daemon grafana.vm06 on vm06 2026-03-10T12:43:13.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: cephadm 
2026-03-10T12:43:12.784247+0000 mgr.vm06.cofomf (mgr.14162) 19 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-10T12:43:13.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:13 vm06 bash[17497]: cephadm 2026-03-10T12:43:12.784247+0000 mgr.vm06.cofomf (mgr.14162) 19 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-10T12:43:14.091 INFO:teuthology.orchestra.run.vm06.stdout:Added host 'vm09' with addr '192.168.123.109' 2026-03-10T12:43:14.160 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph orch host ls --format=json 2026-03-10T12:43:14.424 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:14 vm06 bash[17497]: cluster 2026-03-10T12:43:13.385983+0000 mgr.vm06.cofomf (mgr.14162) 20 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:14.424 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:14 vm06 bash[17497]: cluster 2026-03-10T12:43:13.385983+0000 mgr.vm06.cofomf (mgr.14162) 20 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:14.424 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:14 vm06 bash[17497]: audit 2026-03-10T12:43:13.423420+0000 mon.vm06 (mon.0) 136 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:14.424 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:14 vm06 bash[17497]: audit 2026-03-10T12:43:13.423420+0000 mon.vm06 (mon.0) 136 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:14.424 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:14 vm06 bash[17497]: audit 2026-03-10T12:43:14.091852+0000 mon.vm06 (mon.0) 137 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 
2026-03-10T12:43:14.424 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:14 vm06 bash[17497]: audit 2026-03-10T12:43:14.091852+0000 mon.vm06 (mon.0) 137 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:15.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:15 vm06 bash[17497]: cephadm 2026-03-10T12:43:14.092273+0000 mgr.vm06.cofomf (mgr.14162) 21 : cephadm [INF] Added host vm09 2026-03-10T12:43:15.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:15 vm06 bash[17497]: cephadm 2026-03-10T12:43:14.092273+0000 mgr.vm06.cofomf (mgr.14162) 21 : cephadm [INF] Added host vm09 2026-03-10T12:43:16.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:16 vm06 bash[17497]: cluster 2026-03-10T12:43:15.386163+0000 mgr.vm06.cofomf (mgr.14162) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:16.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:16 vm06 bash[17497]: cluster 2026-03-10T12:43:15.386163+0000 mgr.vm06.cofomf (mgr.14162) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:18.785 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:43:18.810 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:18 vm06 bash[17497]: cluster 2026-03-10T12:43:17.386417+0000 mgr.vm06.cofomf (mgr.14162) 23 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:18.811 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:18 vm06 bash[17497]: cluster 2026-03-10T12:43:17.386417+0000 mgr.vm06.cofomf (mgr.14162) 23 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:19.919 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:43:19.919 INFO:teuthology.orchestra.run.vm06.stdout:[{"addr": "192.168.123.106", "hostname": "vm06", "labels": [], "status": ""}, {"addr": 
"192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}] 2026-03-10T12:43:20.069 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T12:43:20.069 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd crush tunables default 2026-03-10T12:43:20.724 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:20 vm06 bash[17497]: cluster 2026-03-10T12:43:19.386626+0000 mgr.vm06.cofomf (mgr.14162) 24 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:20.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:20 vm06 bash[17497]: cluster 2026-03-10T12:43:19.386626+0000 mgr.vm06.cofomf (mgr.14162) 24 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:21.261 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:20 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:43:21.262 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:19.919943+0000 mgr.vm06.cofomf (mgr.14162) 25 : audit [DBG] from='client.14189 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:19.919943+0000 mgr.vm06.cofomf (mgr.14162) 25 : audit [DBG] from='client.14189 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.288730+0000 mon.vm06 (mon.0) 138 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.288730+0000 mon.vm06 (mon.0) 138 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.292800+0000 mon.vm06 (mon.0) 139 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.292800+0000 mon.vm06 (mon.0) 139 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.296415+0000 mon.vm06 (mon.0) 140 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.296415+0000 mon.vm06 
(mon.0) 140 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.299792+0000 mon.vm06 (mon.0) 141 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.299792+0000 mon.vm06 (mon.0) 141 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.303834+0000 mon.vm06 (mon.0) 142 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.303834+0000 mon.vm06 (mon.0) 142 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.318962+0000 mon.vm06 (mon.0) 143 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.318962+0000 mon.vm06 (mon.0) 143 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.331013+0000 mon.vm06 (mon.0) 144 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.331013+0000 mon.vm06 (mon.0) 144 : audit [INF] from='mgr.14162 
192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.333768+0000 mon.vm06 (mon.0) 145 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:21.513 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:21 vm06 bash[17497]: audit 2026-03-10T12:43:21.333768+0000 mon.vm06 (mon.0) 145 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:22.596 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:22 vm06 bash[17497]: cluster 2026-03-10T12:43:21.386786+0000 mgr.vm06.cofomf (mgr.14162) 26 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:22.596 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:22 vm06 bash[17497]: cluster 2026-03-10T12:43:21.386786+0000 mgr.vm06.cofomf (mgr.14162) 26 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:22.596 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:22 vm06 bash[17497]: cephadm 2026-03-10T12:43:21.533934+0000 mgr.vm06.cofomf (mgr.14162) 27 : cephadm [INF] Deploying daemon prometheus.vm06 on vm06 2026-03-10T12:43:22.596 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:22 vm06 bash[17497]: cephadm 2026-03-10T12:43:21.533934+0000 mgr.vm06.cofomf (mgr.14162) 27 : cephadm [INF] Deploying daemon prometheus.vm06 on vm06 2026-03-10T12:43:24.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:24 vm06 bash[17497]: cluster 2026-03-10T12:43:23.387445+0000 mgr.vm06.cofomf (mgr.14162) 28 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:24.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:24 vm06 bash[17497]: cluster 2026-03-10T12:43:23.387445+0000 mgr.vm06.cofomf (mgr.14162) 28 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:24.847 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:24 vm06 bash[17497]: audit 2026-03-10T12:43:23.437303+0000 mon.vm06 (mon.0) 146 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:24.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:24 vm06 bash[17497]: audit 2026-03-10T12:43:23.437303+0000 mon.vm06 (mon.0) 146 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:25.707 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:43:26.773 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:26 vm06 bash[17497]: cluster 2026-03-10T12:43:25.387660+0000 mgr.vm06.cofomf (mgr.14162) 29 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:26.773 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:26 vm06 bash[17497]: cluster 2026-03-10T12:43:25.387660+0000 mgr.vm06.cofomf (mgr.14162) 29 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:27.447 INFO:teuthology.orchestra.run.vm06.stderr:adjusted tunables profile to default 2026-03-10T12:43:27.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:27 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:43:27.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:27 vm06 bash[17497]: audit 2026-03-10T12:43:26.457319+0000 mon.vm06 (mon.0) 147 : audit [INF] from='client.? 
192.168.123.106:0/462372146' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T12:43:27.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:27 vm06 bash[17497]: audit 2026-03-10T12:43:26.457319+0000 mon.vm06 (mon.0) 147 : audit [INF] from='client.? 192.168.123.106:0/462372146' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T12:43:27.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:27 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:43:27.684 INFO:tasks.cephadm:Adding mon.vm06 on vm06 2026-03-10T12:43:27.684 INFO:tasks.cephadm:Adding mon.vm09 on vm09 2026-03-10T12:43:27.684 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph orch apply mon '2;vm06:192.168.123.106=vm06;vm09:192.168.123.109=vm09' 2026-03-10T12:43:28.699 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: cluster 2026-03-10T12:43:27.387862+0000 mgr.vm06.cofomf (mgr.14162) 30 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:28.699 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: cluster 2026-03-10T12:43:27.387862+0000 mgr.vm06.cofomf (mgr.14162) 30 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:28.699 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 
2026-03-10T12:43:27.447685+0000 mon.vm06 (mon.0) 148 : audit [INF] from='client.? 192.168.123.106:0/462372146' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.447685+0000 mon.vm06 (mon.0) 148 : audit [INF] from='client.? 192.168.123.106:0/462372146' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: cluster 2026-03-10T12:43:27.458929+0000 mon.vm06 (mon.0) 149 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: cluster 2026-03-10T12:43:27.458929+0000 mon.vm06 (mon.0) 149 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.653141+0000 mon.vm06 (mon.0) 150 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.653141+0000 mon.vm06 (mon.0) 150 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.657425+0000 mon.vm06 (mon.0) 151 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.657425+0000 mon.vm06 (mon.0) 151 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 
bash[17497]: audit 2026-03-10T12:43:27.669059+0000 mon.vm06 (mon.0) 152 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.669059+0000 mon.vm06 (mon.0) 152 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.674998+0000 mon.vm06 (mon.0) 153 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:27.674998+0000 mon.vm06 (mon.0) 153 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:28.441801+0000 mon.vm06 (mon.0) 154 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.700 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:28 vm06 bash[17497]: audit 2026-03-10T12:43:28.441801+0000 mon.vm06 (mon.0) 154 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' 2026-03-10T12:43:28.803 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:43:29.824 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:43:30.036 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:29 vm06 bash[17497]: audit 2026-03-10T12:43:28.669489+0000 mon.vm06 (mon.0) 155 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd='[{"prefix": "mgr module 
enable", "module": "prometheus"}]': finished 2026-03-10T12:43:30.036 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:29 vm06 bash[17497]: audit 2026-03-10T12:43:28.669489+0000 mon.vm06 (mon.0) 155 : audit [INF] from='mgr.14162 192.168.123.106:0/1139135516' entity='mgr.vm06.cofomf' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T12:43:30.036 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:29 vm06 bash[17497]: cluster 2026-03-10T12:43:28.673051+0000 mon.vm06 (mon.0) 156 : cluster [DBG] mgrmap e13: vm06.cofomf(active, since 35s) 2026-03-10T12:43:30.036 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:29 vm06 bash[17497]: cluster 2026-03-10T12:43:28.673051+0000 mon.vm06 (mon.0) 156 : cluster [DBG] mgrmap e13: vm06.cofomf(active, since 35s) 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.922746+0000 mon.vm06 (mon.0) 157 : cluster [INF] Active manager daemon vm06.cofomf restarted 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.922746+0000 mon.vm06 (mon.0) 157 : cluster [INF] Active manager daemon vm06.cofomf restarted 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.923171+0000 mon.vm06 (mon.0) 158 : cluster [INF] Activating manager daemon vm06.cofomf 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.923171+0000 mon.vm06 (mon.0) 158 : cluster [INF] Activating manager daemon vm06.cofomf 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.928890+0000 mon.vm06 (mon.0) 159 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 
2026-03-10T12:43:31.928890+0000 mon.vm06 (mon.0) 159 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.929020+0000 mon.vm06 (mon.0) 160 : cluster [DBG] mgrmap e14: vm06.cofomf(active, starting, since 0.00594893s) 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.929020+0000 mon.vm06 (mon.0) 160 : cluster [DBG] mgrmap e14: vm06.cofomf(active, starting, since 0.00594893s) 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.931154+0000 mon.vm06 (mon.0) 161 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.931154+0000 mon.vm06 (mon.0) 161 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.931527+0000 mon.vm06 (mon.0) 162 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm06.cofomf", "id": "vm06.cofomf"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.931527+0000 mon.vm06 (mon.0) 162 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm06.cofomf", "id": "vm06.cofomf"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.932822+0000 mon.vm06 (mon.0) 163 : 
audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.932822+0000 mon.vm06 (mon.0) 163 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.933054+0000 mon.vm06 (mon.0) 164 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.933054+0000 mon.vm06 (mon.0) 164 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.933289+0000 mon.vm06 (mon.0) 165 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.933289+0000 mon.vm06 (mon.0) 165 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.939229+0000 mon.vm06 (mon.0) 166 : cluster [INF] Manager daemon vm06.cofomf is now available 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: cluster 2026-03-10T12:43:31.939229+0000 mon.vm06 (mon.0) 166 : cluster [INF] Manager daemon vm06.cofomf is now 
available 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.955265+0000 mon.vm06 (mon.0) 167 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.955265+0000 mon.vm06 (mon.0) 167 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.956778+0000 mon.vm06 (mon.0) 168 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.956778+0000 mon.vm06 (mon.0) 168 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.972166+0000 mon.vm06 (mon.0) 169 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:43:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:31 vm06 bash[17497]: audit 2026-03-10T12:43:31.972166+0000 mon.vm06 (mon.0) 169 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:43:32.939 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update... 
2026-03-10T12:43:33.006 DEBUG:teuthology.orchestra.run.vm09:mon.vm09> sudo journalctl -f -n 0 -u ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm09.service 2026-03-10T12:43:33.007 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:43:33.007 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph mon dump -f json 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 2026-03-10T12:43:31.989731+0000 mon.vm06 (mon.0) 170 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 2026-03-10T12:43:31.989731+0000 mon.vm06 (mon.0) 170 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 2026-03-10T12:43:32.003311+0000 mon.vm06 (mon.0) 171 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/trash_purge_schedule"}]: dispatch 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 2026-03-10T12:43:32.003311+0000 mon.vm06 (mon.0) 171 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm06.cofomf/trash_purge_schedule"}]: dispatch 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 
2026-03-10T12:43:32.423633+0000 mon.vm06 (mon.0) 172 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 2026-03-10T12:43:32.423633+0000 mon.vm06 (mon.0) 172 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: cluster 2026-03-10T12:43:32.935430+0000 mon.vm06 (mon.0) 173 : cluster [DBG] mgrmap e15: vm06.cofomf(active, since 1.01235s) 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: cluster 2026-03-10T12:43:32.935430+0000 mon.vm06 (mon.0) 173 : cluster [DBG] mgrmap e15: vm06.cofomf(active, since 1.01235s) 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 2026-03-10T12:43:32.939463+0000 mon.vm06 (mon.0) 174 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:32 vm06 bash[17497]: audit 2026-03-10T12:43:32.939463+0000 mon.vm06 (mon.0) 174 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:34.162 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:43:34.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:33 vm06 bash[17497]: cephadm 2026-03-10T12:43:32.936100+0000 mgr.vm06.cofomf (mgr.14193) 2 : cephadm [INF] Saving service mon spec with placement vm06:192.168.123.106=vm06;vm09:192.168.123.109=vm09;count:2 2026-03-10T12:43:34.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:33 vm06 bash[17497]: cephadm 2026-03-10T12:43:32.936100+0000 mgr.vm06.cofomf (mgr.14193) 2 : cephadm [INF] Saving service mon spec with placement vm06:192.168.123.106=vm06;vm09:192.168.123.109=vm09;count:2 
2026-03-10T12:43:35.191 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:33.967297+0000 mgr.vm06.cofomf (mgr.14193) 3 : cephadm [INF] [10/Mar/2026:12:43:33] ENGINE Bus STARTING 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:33.967297+0000 mgr.vm06.cofomf (mgr.14193) 3 : cephadm [INF] [10/Mar/2026:12:43:33] ENGINE Bus STARTING 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cluster 2026-03-10T12:43:33.990433+0000 mon.vm06 (mon.0) 175 : cluster [DBG] mgrmap e16: vm06.cofomf(active, since 2s) 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cluster 2026-03-10T12:43:33.990433+0000 mon.vm06 (mon.0) 175 : cluster [DBG] mgrmap e16: vm06.cofomf(active, since 2s) 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: audit 2026-03-10T12:43:34.019265+0000 mon.vm06 (mon.0) 176 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: audit 2026-03-10T12:43:34.019265+0000 mon.vm06 (mon.0) 176 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.077528+0000 mgr.vm06.cofomf (mgr.14193) 4 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Serving on https://192.168.123.106:7150 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.077528+0000 mgr.vm06.cofomf (mgr.14193) 4 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Serving on https://192.168.123.106:7150 
2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.078024+0000 mgr.vm06.cofomf (mgr.14193) 5 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Client ('192.168.123.106', 44278) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.078024+0000 mgr.vm06.cofomf (mgr.14193) 5 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Client ('192.168.123.106', 44278) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.178490+0000 mgr.vm06.cofomf (mgr.14193) 6 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Serving on http://192.168.123.106:8765 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.178490+0000 mgr.vm06.cofomf (mgr.14193) 6 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Serving on http://192.168.123.106:8765 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.178823+0000 mgr.vm06.cofomf (mgr.14193) 7 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Bus STARTED 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: cephadm 2026-03-10T12:43:34.178823+0000 mgr.vm06.cofomf (mgr.14193) 7 : cephadm [INF] [10/Mar/2026:12:43:34] ENGINE Bus STARTED 2026-03-10T12:43:35.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: audit 2026-03-10T12:43:34.642047+0000 mon.vm06 (mon.0) 177 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:35.347 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:34 vm06 bash[17497]: audit 2026-03-10T12:43:34.642047+0000 mon.vm06 (mon.0) 177 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:35.481 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:43:35.481 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:43:35.481 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"68e2be40-1c7e-11f1-b779-df2955349a39","modified":"2026-03-10T12:42:28.753887Z","created":"2026-03-10T12:42:28.753887Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm06","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:43:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:35 vm06 bash[17497]: audit 2026-03-10T12:43:35.481764+0000 mon.vm06 (mon.0) 178 : audit [DBG] from='client.? 192.168.123.109:0/2530184119' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:35 vm06 bash[17497]: audit 2026-03-10T12:43:35.481764+0000 mon.vm06 (mon.0) 178 : audit [DBG] from='client.? 192.168.123.109:0/2530184119' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:36.534 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:43:36.534 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph mon dump -f json 2026-03-10T12:43:38.528 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:38.825 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:43:38.825 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"68e2be40-1c7e-11f1-b779-df2955349a39","modified":"2026-03-10T12:42:28.753887Z","created":"2026-03-10T12:42:28.753887Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm06","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:43:38.825 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.508506+0000 mon.vm06 (mon.0) 179 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.508506+0000 mon.vm06 (mon.0) 179 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 
12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.512423+0000 mon.vm06 (mon.0) 180 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.512423+0000 mon.vm06 (mon.0) 180 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.553113+0000 mon.vm06 (mon.0) 181 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.553113+0000 mon.vm06 (mon.0) 181 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.555837+0000 mon.vm06 (mon.0) 182 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.555837+0000 mon.vm06 (mon.0) 182 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.558991+0000 mon.vm06 (mon.0) 183 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.558991+0000 mon.vm06 (mon.0) 183 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 
2026-03-10T12:43:37.561208+0000 mon.vm06 (mon.0) 184 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.561208+0000 mon.vm06 (mon.0) 184 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.561674+0000 mon.vm06 (mon.0) 185 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:37.561674+0000 mon.vm06 (mon.0) 185 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.183614+0000 mon.vm06 (mon.0) 186 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.183614+0000 mon.vm06 (mon.0) 186 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.288574+0000 mon.vm06 (mon.0) 187 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.288574+0000 mon.vm06 (mon.0) 187 : audit [INF] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.289663+0000 mon.vm06 (mon.0) 188 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.289663+0000 mon.vm06 (mon.0) 188 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.290299+0000 mon.vm06 (mon.0) 189 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.290299+0000 mon.vm06 (mon.0) 189 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.290698+0000 mon.vm06 (mon.0) 190 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.290698+0000 mon.vm06 (mon.0) 190 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:43:38.848 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.463702+0000 mon.vm06 (mon.0) 191 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.463702+0000 mon.vm06 (mon.0) 191 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.467004+0000 mon.vm06 (mon.0) 192 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.467004+0000 mon.vm06 (mon.0) 192 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.470634+0000 mon.vm06 (mon.0) 193 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.470634+0000 mon.vm06 (mon.0) 193 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.473434+0000 mon.vm06 (mon.0) 194 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.473434+0000 mon.vm06 (mon.0) 194 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 
10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.475940+0000 mon.vm06 (mon.0) 195 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.475940+0000 mon.vm06 (mon.0) 195 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.476993+0000 mon.vm06 (mon.0) 196 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.476993+0000 mon.vm06 (mon.0) 196 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.478187+0000 mon.vm06 (mon.0) 197 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.478187+0000 mon.vm06 (mon.0) 197 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.479584+0000 mon.vm06 (mon.0) 198 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:38.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:38 vm06 bash[17497]: audit 2026-03-10T12:43:38.479584+0000 mon.vm06 (mon.0) 198 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.291289+0000 mgr.vm06.cofomf (mgr.14193) 8 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.291289+0000 mgr.vm06.cofomf (mgr.14193) 8 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.291491+0000 mgr.vm06.cofomf (mgr.14193) 9 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.291491+0000 mgr.vm06.cofomf (mgr.14193) 9 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.329323+0000 mgr.vm06.cofomf (mgr.14193) 10 : cephadm [INF] Updating vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 
2026-03-10T12:43:38.329323+0000 mgr.vm06.cofomf (mgr.14193) 10 : cephadm [INF] Updating vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.333918+0000 mgr.vm06.cofomf (mgr.14193) 11 : cephadm [INF] Updating vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.333918+0000 mgr.vm06.cofomf (mgr.14193) 11 : cephadm [INF] Updating vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.362230+0000 mgr.vm06.cofomf (mgr.14193) 12 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.362230+0000 mgr.vm06.cofomf (mgr.14193) 12 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.371151+0000 mgr.vm06.cofomf (mgr.14193) 13 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.371151+0000 mgr.vm06.cofomf (mgr.14193) 13 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.393758+0000 mgr.vm06.cofomf (mgr.14193) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.client.admin.keyring 2026-03-10T12:43:39.847 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.393758+0000 mgr.vm06.cofomf (mgr.14193) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.client.admin.keyring 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.410099+0000 mgr.vm06.cofomf (mgr.14193) 15 : cephadm [INF] Updating vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.client.admin.keyring 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.410099+0000 mgr.vm06.cofomf (mgr.14193) 15 : cephadm [INF] Updating vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.client.admin.keyring 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.480176+0000 mgr.vm06.cofomf (mgr.14193) 16 : cephadm [INF] Deploying daemon ceph-exporter.vm09 on vm09 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: cephadm 2026-03-10T12:43:38.480176+0000 mgr.vm06.cofomf (mgr.14193) 16 : cephadm [INF] Deploying daemon ceph-exporter.vm09 on vm09 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: audit 2026-03-10T12:43:38.825906+0000 mon.vm06 (mon.0) 199 : audit [DBG] from='client.? 192.168.123.109:0/3223024403' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:39.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:39 vm06 bash[17497]: audit 2026-03-10T12:43:38.825906+0000 mon.vm06 (mon.0) 199 : audit [DBG] from='client.? 192.168.123.109:0/3223024403' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:39.877 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:43:39.878 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph mon dump -f json
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:39.961378+0000 mon.vm06 (mon.0) 200 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:39.963780+0000 mon.vm06 (mon.0) 201 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:39.966366+0000 mon.vm06 (mon.0) 202 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:39.968473+0000 mon.vm06 (mon.0) 203 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:39.969364+0000 mon.vm06 (mon.0) 204 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:39.970499+0000 mon.vm06 (mon.0) 205 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:39.971851+0000 mon.vm06 (mon.0) 206 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: cephadm 2026-03-10T12:43:39.972289+0000 mgr.vm06.cofomf (mgr.14193) 17 : cephadm [INF] Deploying daemon crash.vm09 on vm09
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:40.857715+0000 mon.vm06 (mon.0) 207 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:40.860436+0000 mon.vm06 (mon.0) 208 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:40.862957+0000 mon.vm06 (mon.0) 209 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:41.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:40 vm06 bash[17497]: audit 2026-03-10T12:43:40.865424+0000 mon.vm06 (mon.0) 210 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:42.566 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf
2026-03-10T12:43:42.775 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:42 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:43:42.995 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T12:43:42.996 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"68e2be40-1c7e-11f1-b779-df2955349a39","modified":"2026-03-10T12:42:28.753887Z","created":"2026-03-10T12:42:28.753887Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm06","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T12:43:42.996 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: cephadm 2026-03-10T12:43:40.866161+0000 mgr.vm06.cofomf (mgr.14193) 18 : cephadm [INF] Deploying daemon node-exporter.vm09 on vm09
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.623195+0000 mon.vm06 (mon.0) 211 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.625525+0000 mon.vm06 (mon.0) 212 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.627718+0000 mon.vm06 (mon.0) 213 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.630744+0000 mon.vm06 (mon.0) 214 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.631944+0000 mon.vm06 (mon.0) 215 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.mcduck", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.633323+0000 mon.vm06 (mon.0) 216 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm09.mcduck", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.635825+0000 mon.vm06 (mon.0) 217 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:43:43.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.636754+0000 mon.vm06 (mon.0) 218 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: cephadm 2026-03-10T12:43:41.637548+0000 mgr.vm06.cofomf (mgr.14193) 19 : cephadm [INF] Deploying daemon mgr.vm09.mcduck on vm09
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:41.961715+0000 mon.vm06 (mon.0) 219 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:42.429132+0000 mon.vm06 (mon.0) 220 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:42.432291+0000 mon.vm06 (mon.0) 221 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:42.434966+0000 mon.vm06 (mon.0) 222 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:42.437506+0000 mon.vm06 (mon.0) 223 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:42.438663+0000 mon.vm06 (mon.0) 224 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T12:43:43.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:42 vm06 bash[17497]: audit 2026-03-10T12:43:42.439553+0000 mon.vm06 (mon.0) 225 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:44.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:43 vm06 bash[17497]: cephadm 2026-03-10T12:43:42.440418+0000 mgr.vm06.cofomf (mgr.14193) 20 : cephadm [INF] Deploying daemon mon.vm09 on vm09
2026-03-10T12:43:44.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:43 vm06 bash[17497]: audit 2026-03-10T12:43:42.995957+0000 mon.vm06 (mon.0) 226 : audit [DBG] from='client.? 192.168.123.109:0/2188445947' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T12:43:44.303 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T12:43:44.303 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph mon dump -f json
2026-03-10T12:43:44.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:43:44.361 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:43:44.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'.
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:43:44.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 systemd[1]: Started Ceph mon.vm09 for 68e2be40-1c7e-11f1-b779-df2955349a39. 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.844+0000 7f0e48b34d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.844+0000 7f0e48b34d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.844+0000 7f0e48b34d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 0 load: jerasure load: lrc 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Git sha 0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: DB SUMMARY 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: DB Session ID: RAZ2VX0RC1GCSTBH0MXN 2026-03-10T12:43:44.973 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm09/store.db dir, Total Num: 0, files: 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm09/store.db: 000004.log size: 511 ; 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 
2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.env: 0x56437f7e8dc0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.info_log: 0x564386e00b20 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 
bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T12:43:44.973 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.db_log_dir: 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.wal_dir: 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: 
Options.table_cache_numshardbits: 6 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.write_buffer_manager: 0x564386e05900 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.random_access_max_buffer_size: 
1048576 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T12:43:44.974 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.row_cache: None 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.wal_filter: None 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 
2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 
7f0e48b34d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T12:43:44.974 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: 
Options.stats_history_buffer_size: 1048576 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Compression algorithms supported: 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kZSTD supported: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T12:43:44.975 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm09/store.db/MANIFEST-000005 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: 
Options.comparator: leveldb.BytewiseComparator 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.merge_operator: 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564386e006e0) 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: cache_index_and_filter_blocks: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 
bash[21409]: pin_top_level_index_and_filter: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: index_type: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: data_block_index_type: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: index_shortening: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: checksum: 4 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: no_block_cache: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: block_cache: 0x564386e27350 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: block_cache_name: BinnedLRUCache 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: block_cache_options: 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: capacity : 536870912 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: num_shard_bits : 4 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: strict_capacity_limit : 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: high_pri_pool_ratio: 0.000 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: block_cache_compressed: (nil) 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: persistent_cache: (nil) 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 
bash[21409]: block_size: 4096 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: block_size_deviation: 10 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: block_restart_interval: 16 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: index_block_restart_interval: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: metadata_block_size: 4096 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: partition_filters: 0 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: use_delta_encoding: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: filter_policy: bloomfilter 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: whole_key_filtering: 1 2026-03-10T12:43:44.975 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: verify_compression: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: read_amp_bytes_per_bit: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: format_version: 5 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: enable_index_compression: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: block_align: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: max_auto_readahead_size: 262144 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: prepopulate_block_cache: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 
bash[21409]: initial_auto_readahead_size: 8192 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: num_file_reads_for_auto_readahead: 2 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.num_levels: 7 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T12:43:44.976 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 
2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T12:43:44.976 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 
bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T12:43:44.976 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: 
Options.memtable_huge_page_size: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T12:43:44.977 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 
2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm09/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7c4f763b-1a83-4f09-96d6-dbe38b14b546 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773146624855229, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773146624856117, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, 
"raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773146624, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7c4f763b-1a83-4f09-96d6-dbe38b14b546", "db_session_id": "RAZ2VX0RC1GCSTBH0MXN", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773146624856190, "job": 1, "event": "recovery_finished"} 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.848+0000 7f0e48b34d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.856+0000 7f0e48b34d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm09/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.856+0000 7f0e48b34d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] 
SstFileManager instance 0x564386e28e00 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.856+0000 7f0e48b34d80 4 rocksdb: DB pointer 0x564386f42000 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.860+0000 7f0e48b34d80 0 mon.vm09 does not exist in monmap, will attempt to join an existing cluster 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.860+0000 7f0e48b34d80 0 using public_addr v2:192.168.123.109:0/0 -> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.860+0000 7f0e48b34d80 0 starting mon.vm09 rank -1 at public addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] at bind addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon_data /var/lib/ceph/mon/ceph-vm09 fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.860+0000 7f0e48b34d80 1 mon.vm09@-1(???) 
e0 preinit fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.860+0000 7f0e3e8fe640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.860+0000 7f0e3e8fe640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: ** DB Stats ** 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: ** Compaction Stats [default] ** 2026-03-10T12:43:44.977 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 
12:43:44 vm09 bash[21409]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: ** Compaction Stats [default] ** 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T12:43:44.978 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Block cache BinnedLRUCache@0x564386e27350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 0 last_secs: 5e-06 secs_since: 0 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 0 mon.vm09@-1(synchronizing).mds e1 new map 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 0 mon.vm09@-1(synchronizing).mds e1 print_map 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: e1 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: btime 2026-03-10T12:42:30:013516+0000 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: legacy client fscid: -1 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: No filesystems 
configured 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).osd e5 e5: 0 total, 0 up, 0 in 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 0 mon.vm09@-1(synchronizing).osd e5 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 0 
mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 0 mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.908+0000 7f0e41904640 0 mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: cephadm 2026-03-10T12:43:42.440418+0000 mgr.vm06.cofomf (mgr.14193) 20 : cephadm [INF] Deploying daemon mon.vm09 on vm09 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: cephadm 2026-03-10T12:43:42.440418+0000 mgr.vm06.cofomf (mgr.14193) 20 : cephadm [INF] Deploying daemon mon.vm09 on vm09 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: audit 2026-03-10T12:43:42.995957+0000 mon.vm06 (mon.0) 226 : audit [DBG] from='client.? 192.168.123.109:0/2188445947' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: audit 2026-03-10T12:43:42.995957+0000 mon.vm06 (mon.0) 226 : audit [DBG] from='client.? 
192.168.123.109:0/2188445947' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:44.978 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:44 vm09 bash[21409]: debug 2026-03-10T12:43:44.912+0000 7f0e41904640 1 mon.vm09@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3 2026-03-10T12:43:49.036 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm09/config 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:44.931000+0000 mon.vm06 (mon.0) 234 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:44.931000+0000 mon.vm06 (mon.0) 234 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:44.931079+0000 mon.vm06 (mon.0) 235 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:44.931079+0000 mon.vm06 (mon.0) 235 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: cluster 2026-03-10T12:43:44.931287+0000 mon.vm06 (mon.0) 236 : cluster [INF] mon.vm06 calling monitor election 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 
bash[17497]: cluster 2026-03-10T12:43:44.931287+0000 mon.vm06 (mon.0) 236 : cluster [INF] mon.vm06 calling monitor election 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:45.920939+0000 mon.vm06 (mon.0) 237 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:45.920939+0000 mon.vm06 (mon.0) 237 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:46.368786+0000 mon.vm06 (mon.0) 238 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/crt"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:49 vm06 bash[17497]: audit 2026-03-10T12:43:46.368786+0000 mon.vm06 (mon.0) 238 : audit [DBG] from='mgr.? 
192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/crt"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:46.921687+0000 mon.vm09 (mon.1) 1 : cluster [INF] mon.vm09 calling monitor election 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:46.921687+0000 mon.vm09 (mon.1) 1 : cluster [INF] mon.vm09 calling monitor election 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:46.921771+0000 mon.vm06 (mon.0) 239 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:46.921771+0000 mon.vm06 (mon.0) 239 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:46.988887+0000 mon.vm06 (mon.0) 240 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:46.988887+0000 mon.vm06 (mon.0) 240 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:47.921094+0000 mon.vm06 (mon.0) 241 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:47.921094+0000 mon.vm06 (mon.0) 241 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:48.921759+0000 mon.vm06 (mon.0) 242 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:48.921759+0000 mon.vm06 (mon.0) 242 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.921403+0000 mon.vm06 (mon.0) 243 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.921403+0000 mon.vm06 (mon.0) 243 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.936030+0000 mon.vm06 (mon.0) 244 : cluster [INF] mon.vm06 is new leader, mons vm06,vm09 in quorum (ranks 0,1) 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.936030+0000 mon.vm06 (mon.0) 244 : cluster 
[INF] mon.vm06 is new leader, mons vm06,vm09 in quorum (ranks 0,1) 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939771+0000 mon.vm06 (mon.0) 245 : cluster [DBG] monmap epoch 2 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939771+0000 mon.vm06 (mon.0) 245 : cluster [DBG] monmap epoch 2 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939793+0000 mon.vm06 (mon.0) 246 : cluster [DBG] fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939793+0000 mon.vm06 (mon.0) 246 : cluster [DBG] fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939852+0000 mon.vm06 (mon.0) 247 : cluster [DBG] last_changed 2026-03-10T12:43:44.928120+0000 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939852+0000 mon.vm06 (mon.0) 247 : cluster [DBG] last_changed 2026-03-10T12:43:44.928120+0000 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939862+0000 mon.vm06 (mon.0) 248 : cluster [DBG] created 2026-03-10T12:42:28.753887+0000 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939862+0000 mon.vm06 (mon.0) 248 : cluster [DBG] created 2026-03-10T12:42:28.753887+0000 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939871+0000 mon.vm06 (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T12:43:50.348 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939871+0000 mon.vm06 (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939884+0000 mon.vm06 (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939884+0000 mon.vm06 (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939937+0000 mon.vm06 (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.vm06 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939937+0000 mon.vm06 (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.vm06 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939974+0000 mon.vm06 (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.939974+0000 mon.vm06 (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940352+0000 mon.vm06 (mon.0) 253 : cluster [DBG] fsmap 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940352+0000 mon.vm06 (mon.0) 253 : cluster [DBG] fsmap 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 
vm06 bash[17497]: cluster 2026-03-10T12:43:49.940398+0000 mon.vm06 (mon.0) 254 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940398+0000 mon.vm06 (mon.0) 254 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940553+0000 mon.vm06 (mon.0) 255 : cluster [DBG] mgrmap e16: vm06.cofomf(active, since 18s) 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940553+0000 mon.vm06 (mon.0) 255 : cluster [DBG] mgrmap e16: vm06.cofomf(active, since 18s) 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940662+0000 mon.vm06 (mon.0) 256 : cluster [INF] overall HEALTH_OK 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940662+0000 mon.vm06 (mon.0) 256 : cluster [INF] overall HEALTH_OK 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940962+0000 mon.vm06 (mon.0) 257 : cluster [DBG] Standby manager daemon vm09.mcduck started 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: cluster 2026-03-10T12:43:49.940962+0000 mon.vm06 (mon.0) 257 : cluster [DBG] Standby manager daemon vm09.mcduck started 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.943097+0000 mon.vm06 (mon.0) 258 : audit [DBG] from='mgr.? 
192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.943097+0000 mon.vm06 (mon.0) 258 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.944315+0000 mon.vm06 (mon.0) 259 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.944315+0000 mon.vm06 (mon.0) 259 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.947414+0000 mon.vm06 (mon.0) 260 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/key"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.947414+0000 mon.vm06 (mon.0) 260 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/key"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.947729+0000 mon.vm06 (mon.0) 261 : audit [DBG] from='mgr.? 
192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.947729+0000 mon.vm06 (mon.0) 261 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.949326+0000 mon.vm06 (mon.0) 262 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.949326+0000 mon.vm06 (mon.0) 262 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.957871+0000 mon.vm06 (mon.0) 263 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.957871+0000 mon.vm06 (mon.0) 263 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.958552+0000 mon.vm06 (mon.0) 264 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.958552+0000 mon.vm06 (mon.0) 264 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.959180+0000 mon.vm06 (mon.0) 265 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:43:50.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:50 vm06 bash[17497]: audit 2026-03-10T12:43:49.959180+0000 mon.vm06 (mon.0) 265 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:44.931000+0000 mon.vm06 (mon.0) 234 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:44.931000+0000 mon.vm06 (mon.0) 234 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:44.931079+0000 mon.vm06 (mon.0) 235 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:44.931079+0000 mon.vm06 (mon.0) 235 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:44.931287+0000 
mon.vm06 (mon.0) 236 : cluster [INF] mon.vm06 calling monitor election 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:44.931287+0000 mon.vm06 (mon.0) 236 : cluster [INF] mon.vm06 calling monitor election 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:45.920939+0000 mon.vm06 (mon.0) 237 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:45.920939+0000 mon.vm06 (mon.0) 237 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:46.368786+0000 mon.vm06 (mon.0) 238 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/crt"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:46.368786+0000 mon.vm06 (mon.0) 238 : audit [DBG] from='mgr.? 
192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/crt"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:46.921687+0000 mon.vm09 (mon.1) 1 : cluster [INF] mon.vm09 calling monitor election 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:46.921687+0000 mon.vm09 (mon.1) 1 : cluster [INF] mon.vm09 calling monitor election 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:46.921771+0000 mon.vm06 (mon.0) 239 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:46.921771+0000 mon.vm06 (mon.0) 239 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:46.988887+0000 mon.vm06 (mon.0) 240 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:46.988887+0000 mon.vm06 (mon.0) 240 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:47.921094+0000 mon.vm06 (mon.0) 241 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:47.921094+0000 mon.vm06 (mon.0) 241 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:48.921759+0000 mon.vm06 (mon.0) 242 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:48.921759+0000 mon.vm06 (mon.0) 242 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.921403+0000 mon.vm06 (mon.0) 243 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.921403+0000 mon.vm06 (mon.0) 243 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.936030+0000 mon.vm06 (mon.0) 244 : cluster [INF] mon.vm06 is new leader, mons vm06,vm09 in quorum (ranks 0,1) 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.936030+0000 mon.vm06 (mon.0) 244 : cluster 
[INF] mon.vm06 is new leader, mons vm06,vm09 in quorum (ranks 0,1) 2026-03-10T12:43:50.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939771+0000 mon.vm06 (mon.0) 245 : cluster [DBG] monmap epoch 2 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939771+0000 mon.vm06 (mon.0) 245 : cluster [DBG] monmap epoch 2 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939793+0000 mon.vm06 (mon.0) 246 : cluster [DBG] fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939793+0000 mon.vm06 (mon.0) 246 : cluster [DBG] fsid 68e2be40-1c7e-11f1-b779-df2955349a39 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939852+0000 mon.vm06 (mon.0) 247 : cluster [DBG] last_changed 2026-03-10T12:43:44.928120+0000 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939852+0000 mon.vm06 (mon.0) 247 : cluster [DBG] last_changed 2026-03-10T12:43:44.928120+0000 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939862+0000 mon.vm06 (mon.0) 248 : cluster [DBG] created 2026-03-10T12:42:28.753887+0000 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939862+0000 mon.vm06 (mon.0) 248 : cluster [DBG] created 2026-03-10T12:42:28.753887+0000 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939871+0000 mon.vm06 (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T12:43:50.360 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939871+0000 mon.vm06 (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939884+0000 mon.vm06 (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939884+0000 mon.vm06 (mon.0) 250 : cluster [DBG] election_strategy: 1 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939937+0000 mon.vm06 (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.vm06 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939937+0000 mon.vm06 (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.vm06 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939974+0000 mon.vm06 (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.939974+0000 mon.vm06 (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940352+0000 mon.vm06 (mon.0) 253 : cluster [DBG] fsmap 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940352+0000 mon.vm06 (mon.0) 253 : cluster [DBG] fsmap 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 
vm09 bash[21409]: cluster 2026-03-10T12:43:49.940398+0000 mon.vm06 (mon.0) 254 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940398+0000 mon.vm06 (mon.0) 254 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940553+0000 mon.vm06 (mon.0) 255 : cluster [DBG] mgrmap e16: vm06.cofomf(active, since 18s) 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940553+0000 mon.vm06 (mon.0) 255 : cluster [DBG] mgrmap e16: vm06.cofomf(active, since 18s) 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940662+0000 mon.vm06 (mon.0) 256 : cluster [INF] overall HEALTH_OK 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940662+0000 mon.vm06 (mon.0) 256 : cluster [INF] overall HEALTH_OK 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940962+0000 mon.vm06 (mon.0) 257 : cluster [DBG] Standby manager daemon vm09.mcduck started 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: cluster 2026-03-10T12:43:49.940962+0000 mon.vm06 (mon.0) 257 : cluster [DBG] Standby manager daemon vm09.mcduck started 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.943097+0000 mon.vm06 (mon.0) 258 : audit [DBG] from='mgr.? 
192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.943097+0000 mon.vm06 (mon.0) 258 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.944315+0000 mon.vm06 (mon.0) 259 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.944315+0000 mon.vm06 (mon.0) 259 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.947414+0000 mon.vm06 (mon.0) 260 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/key"}]: dispatch 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.947414+0000 mon.vm06 (mon.0) 260 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.mcduck/key"}]: dispatch 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.947729+0000 mon.vm06 (mon.0) 261 : audit [DBG] from='mgr.? 
192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:49 vm09 bash[21409]: audit 2026-03-10T12:43:49.947729+0000 mon.vm06 (mon.0) 261 : audit [DBG] from='mgr.? 192.168.123.109:0/697830854' entity='mgr.vm09.mcduck' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.949326+0000 mon.vm06 (mon.0) 262 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.949326+0000 mon.vm06 (mon.0) 262 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.957871+0000 mon.vm06 (mon.0) 263 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.361 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.957871+0000 mon.vm06 (mon.0) 263 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:50.361 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.958552+0000 mon.vm06 (mon.0) 264 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:50.361 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.958552+0000 mon.vm06 (mon.0) 264 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T12:43:50.361 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.959180+0000 mon.vm06 (mon.0) 265 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:43:50.361 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:50 vm09 bash[21409]: audit 2026-03-10T12:43:49.959180+0000 mon.vm06 (mon.0) 265 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:43:50.774 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2 2026-03-10T12:43:50.775 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:43:50.775 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"68e2be40-1c7e-11f1-b779-df2955349a39","modified":"2026-03-10T12:43:44.928120Z","created":"2026-03-10T12:42:28.753887Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm06","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm09","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T12:43:50.851 INFO:tasks.cephadm:Generating final ceph.conf file... 
2026-03-10T12:43:50.851 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph config generate-minimal-conf 2026-03-10T12:43:51.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:49.959888+0000 mgr.vm06.cofomf (mgr.14193) 21 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-10T12:43:51.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:49.959888+0000 mgr.vm06.cofomf (mgr.14193) 21 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-10T12:43:51.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:49.960001+0000 mgr.vm06.cofomf (mgr.14193) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-10T12:43:51.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:49.960001+0000 mgr.vm06.cofomf (mgr.14193) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-10T12:43:51.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.014706+0000 mgr.vm06.cofomf (mgr.14193) 23 : cephadm [INF] Updating vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:51.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.014706+0000 mgr.vm06.cofomf (mgr.14193) 23 : cephadm [INF] Updating vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:51.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.020880+0000 mgr.vm06.cofomf (mgr.14193) 24 : cephadm [INF] Updating 
vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.020880+0000 mgr.vm06.cofomf (mgr.14193) 24 : cephadm [INF] Updating vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cluster 2026-03-10T12:43:50.036639+0000 mon.vm06 (mon.0) 266 : cluster [DBG] mgrmap e17: vm06.cofomf(active, since 18s), standbys: vm09.mcduck 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cluster 2026-03-10T12:43:50.036639+0000 mon.vm06 (mon.0) 266 : cluster [DBG] mgrmap e17: vm06.cofomf(active, since 18s), standbys: vm09.mcduck 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.038766+0000 mon.vm06 (mon.0) 267 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm09.mcduck", "id": "vm09.mcduck"}]: dispatch 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.038766+0000 mon.vm06 (mon.0) 267 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm09.mcduck", "id": "vm09.mcduck"}]: dispatch 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.066271+0000 mon.vm06 (mon.0) 268 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.066271+0000 mon.vm06 (mon.0) 268 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.070756+0000 mon.vm06 (mon.0) 269 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.070756+0000 mon.vm06 (mon.0) 269 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.074347+0000 mon.vm06 (mon.0) 270 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.074347+0000 mon.vm06 (mon.0) 270 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.077871+0000 mon.vm06 (mon.0) 271 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.077871+0000 mon.vm06 (mon.0) 271 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.081117+0000 mon.vm06 (mon.0) 272 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.081117+0000 mon.vm06 (mon.0) 272 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 
10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.092280+0000 mgr.vm06.cofomf (mgr.14193) 25 : cephadm [INF] Reconfiguring grafana.vm06 (dependencies changed)... 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.092280+0000 mgr.vm06.cofomf (mgr.14193) 25 : cephadm [INF] Reconfiguring grafana.vm06 (dependencies changed)... 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.128532+0000 mgr.vm06.cofomf (mgr.14193) 26 : cephadm [INF] Reconfiguring daemon grafana.vm06 on vm06 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.128532+0000 mgr.vm06.cofomf (mgr.14193) 26 : cephadm [INF] Reconfiguring daemon grafana.vm06 on vm06 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.768967+0000 mon.vm06 (mon.0) 273 : audit [DBG] from='client.? 192.168.123.109:0/353391551' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.768967+0000 mon.vm06 (mon.0) 273 : audit [DBG] from='client.? 
192.168.123.109:0/353391551' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.806198+0000 mon.vm06 (mon.0) 274 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.806198+0000 mon.vm06 (mon.0) 274 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.818940+0000 mon.vm06 (mon.0) 275 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.818940+0000 mon.vm06 (mon.0) 275 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.921592+0000 mon.vm06 (mon.0) 276 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:51.182 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:51 vm06 bash[17497]: audit 2026-03-10T12:43:50.921592+0000 mon.vm06 (mon.0) 276 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: cephadm 2026-03-10T12:43:49.959888+0000 mgr.vm06.cofomf (mgr.14193) 21 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 
bash[21409]: cephadm 2026-03-10T12:43:49.959888+0000 mgr.vm06.cofomf (mgr.14193) 21 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: cephadm 2026-03-10T12:43:49.960001+0000 mgr.vm06.cofomf (mgr.14193) 22 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: cephadm 2026-03-10T12:43:50.014706+0000 mgr.vm06.cofomf (mgr.14193) 23 : cephadm [INF] Updating vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: cephadm 2026-03-10T12:43:50.020880+0000 mgr.vm06.cofomf (mgr.14193) 24 : cephadm [INF] Updating vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/config/ceph.conf
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: cluster 2026-03-10T12:43:50.036639+0000 mon.vm06 (mon.0) 266 : cluster [DBG] mgrmap e17: vm06.cofomf(active, since 18s), standbys: vm09.mcduck
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.038766+0000 mon.vm06 (mon.0) 267 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr metadata", "who": "vm09.mcduck", "id": "vm09.mcduck"}]: dispatch
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.066271+0000 mon.vm06 (mon.0) 268 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.070756+0000 mon.vm06 (mon.0) 269 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.074347+0000 mon.vm06 (mon.0) 270 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.077871+0000 mon.vm06 (mon.0) 271 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.081117+0000 mon.vm06 (mon.0) 272 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: cephadm 2026-03-10T12:43:50.092280+0000 mgr.vm06.cofomf (mgr.14193) 25 : cephadm [INF] Reconfiguring grafana.vm06 (dependencies changed)...
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: cephadm 2026-03-10T12:43:50.128532+0000 mgr.vm06.cofomf (mgr.14193) 26 : cephadm [INF] Reconfiguring daemon grafana.vm06 on vm06
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.768967+0000 mon.vm06 (mon.0) 273 : audit [DBG] from='client.? 192.168.123.109:0/353391551' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T12:43:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.806198+0000 mon.vm06 (mon.0) 274 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:51.610 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.818940+0000 mon.vm06 (mon.0) 275 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:51.610 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:51 vm09 bash[21409]: audit 2026-03-10T12:43:50.921592+0000 mon.vm06 (mon.0) 276 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T12:43:52.538 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:52 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.820482+0000 mgr.vm06.cofomf (mgr.14193) 27 : cephadm [INF] Reconfiguring alertmanager.vm06 (dependencies changed)...
2026-03-10T12:43:52.538 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:52 vm06 bash[17497]: cephadm 2026-03-10T12:43:50.826112+0000 mgr.vm06.cofomf (mgr.14193) 28 : cephadm [INF] Reconfiguring daemon alertmanager.vm06 on vm06
2026-03-10T12:43:52.538 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:52 vm06 bash[17497]: audit 2026-03-10T12:43:52.085071+0000 mon.vm06 (mon.0) 277 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:52.538 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:52 vm06 bash[17497]: audit 2026-03-10T12:43:52.090399+0000 mon.vm06 (mon.0) 278 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:52.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:52 vm09 bash[21409]: cephadm 2026-03-10T12:43:50.820482+0000 mgr.vm06.cofomf (mgr.14193) 27 : cephadm [INF] Reconfiguring alertmanager.vm06 (dependencies changed)...
2026-03-10T12:43:52.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:52 vm09 bash[21409]: cephadm 2026-03-10T12:43:50.826112+0000 mgr.vm06.cofomf (mgr.14193) 28 : cephadm [INF] Reconfiguring daemon alertmanager.vm06 on vm06
2026-03-10T12:43:52.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:52 vm09 bash[21409]: audit 2026-03-10T12:43:52.085071+0000 mon.vm06 (mon.0) 277 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:52.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:52 vm09 bash[21409]: audit 2026-03-10T12:43:52.090399+0000 mon.vm06 (mon.0) 278 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: cluster 2026-03-10T12:43:51.933701+0000 mgr.vm06.cofomf (mgr.14193) 29 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: cephadm 2026-03-10T12:43:52.091211+0000 mgr.vm06.cofomf (mgr.14193) 30 : cephadm [INF] Reconfiguring prometheus.vm06 (dependencies changed)...
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: cephadm 2026-03-10T12:43:52.269160+0000 mgr.vm06.cofomf (mgr.14193) 31 : cephadm [INF] Reconfiguring daemon prometheus.vm06 on vm06
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: audit 2026-03-10T12:43:52.890616+0000 mon.vm06 (mon.0) 279 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: audit 2026-03-10T12:43:52.896449+0000 mon.vm06 (mon.0) 280 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: audit 2026-03-10T12:43:52.897683+0000 mon.vm06 (mon.0) 281 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: audit 2026-03-10T12:43:52.898234+0000 mon.vm06 (mon.0) 282 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T12:43:53.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:53 vm06 bash[17497]: audit 2026-03-10T12:43:52.898653+0000 mon.vm06 (mon.0) 283 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: cluster 2026-03-10T12:43:51.933701+0000 mgr.vm06.cofomf (mgr.14193) 29 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: cephadm 2026-03-10T12:43:52.091211+0000 mgr.vm06.cofomf (mgr.14193) 30 : cephadm [INF] Reconfiguring prometheus.vm06 (dependencies changed)...
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: cephadm 2026-03-10T12:43:52.269160+0000 mgr.vm06.cofomf (mgr.14193) 31 : cephadm [INF] Reconfiguring daemon prometheus.vm06 on vm06
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: audit 2026-03-10T12:43:52.890616+0000 mon.vm06 (mon.0) 279 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: audit 2026-03-10T12:43:52.896449+0000 mon.vm06 (mon.0) 280 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: audit 2026-03-10T12:43:52.897683+0000 mon.vm06 (mon.0) 281 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: audit 2026-03-10T12:43:52.898234+0000 mon.vm06 (mon.0) 282 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T12:43:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:53 vm09 bash[21409]: audit 2026-03-10T12:43:52.898653+0000 mon.vm06 (mon.0) 283 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: cephadm 2026-03-10T12:43:52.897414+0000 mgr.vm06.cofomf (mgr.14193) 32 : cephadm [INF] Reconfiguring mon.vm06 (unknown last config time)...
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: cephadm 2026-03-10T12:43:52.899317+0000 mgr.vm06.cofomf (mgr.14193) 33 : cephadm [INF] Reconfiguring daemon mon.vm06 on vm06
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.309941+0000 mon.vm06 (mon.0) 284 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.313957+0000 mon.vm06 (mon.0) 285 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: cephadm 2026-03-10T12:43:53.314557+0000 mgr.vm06.cofomf (mgr.14193) 34 : cephadm [INF] Reconfiguring ceph-exporter.vm06 (monmap changed)...
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.314769+0000 mon.vm06 (mon.0) 286 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm06", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.315325+0000 mon.vm06 (mon.0) 287 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:54.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: cephadm 2026-03-10T12:43:53.315884+0000 mgr.vm06.cofomf (mgr.14193) 35 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm06 on vm06
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.717559+0000 mon.vm06 (mon.0) 288 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.723539+0000 mon.vm06 (mon.0) 289 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.725009+0000 mon.vm06 (mon.0) 290 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm06", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:53.725760+0000 mon.vm06 (mon.0) 291 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:54.125617+0000 mon.vm06 (mon.0) 292 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:54.129713+0000 mon.vm06 (mon.0) 293 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:54.130602+0000 mon.vm06 (mon.0) 294 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm06.cofomf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:54.131171+0000 mon.vm06 (mon.0) 295 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:43:54.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:54 vm06 bash[17497]: audit 2026-03-10T12:43:54.131619+0000 mon.vm06 (mon.0) 296 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: cephadm 2026-03-10T12:43:52.897414+0000 mgr.vm06.cofomf (mgr.14193) 32 : cephadm [INF] Reconfiguring mon.vm06 (unknown last config time)...
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: cephadm 2026-03-10T12:43:52.899317+0000 mgr.vm06.cofomf (mgr.14193) 33 : cephadm [INF] Reconfiguring daemon mon.vm06 on vm06
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.309941+0000 mon.vm06 (mon.0) 284 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.313957+0000 mon.vm06 (mon.0) 285 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: cephadm 2026-03-10T12:43:53.314557+0000 mgr.vm06.cofomf (mgr.14193) 34 : cephadm [INF] Reconfiguring ceph-exporter.vm06 (monmap changed)...
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.314769+0000 mon.vm06 (mon.0) 286 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm06", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.315325+0000 mon.vm06 (mon.0) 287 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: cephadm 2026-03-10T12:43:53.315884+0000 mgr.vm06.cofomf (mgr.14193) 35 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm06 on vm06
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.717559+0000 mon.vm06 (mon.0) 288 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.723539+0000 mon.vm06 (mon.0) 289 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.725009+0000 mon.vm06 (mon.0) 290 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm06", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:53.725760+0000 mon.vm06 (mon.0) 291 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:54.125617+0000 mon.vm06 (mon.0) 292 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:54.129713+0000 mon.vm06 (mon.0) 293 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:54.130602+0000 mon.vm06 (mon.0) 294 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm06.cofomf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:54.131171+0000 mon.vm06 (mon.0) 295 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:43:54.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:54 vm09 bash[21409]: audit 2026-03-10T12:43:54.131619+0000 mon.vm06 (mon.0) 296 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:53.724728+0000 mgr.vm06.cofomf (mgr.14193) 36 : cephadm [INF] Reconfiguring crash.vm06 (monmap changed)...
2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:53.724728+0000 mgr.vm06.cofomf (mgr.14193) 36 : cephadm [INF] Reconfiguring crash.vm06 (monmap changed)... 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:53.726404+0000 mgr.vm06.cofomf (mgr.14193) 37 : cephadm [INF] Reconfiguring daemon crash.vm06 on vm06 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:53.726404+0000 mgr.vm06.cofomf (mgr.14193) 37 : cephadm [INF] Reconfiguring daemon crash.vm06 on vm06 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cluster 2026-03-10T12:43:53.933916+0000 mgr.vm06.cofomf (mgr.14193) 38 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cluster 2026-03-10T12:43:53.933916+0000 mgr.vm06.cofomf (mgr.14193) 38 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.130385+0000 mgr.vm06.cofomf (mgr.14193) 39 : cephadm [INF] Reconfiguring mgr.vm06.cofomf (unknown last config time)... 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.130385+0000 mgr.vm06.cofomf (mgr.14193) 39 : cephadm [INF] Reconfiguring mgr.vm06.cofomf (unknown last config time)... 
2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.132122+0000 mgr.vm06.cofomf (mgr.14193) 40 : cephadm [INF] Reconfiguring daemon mgr.vm06.cofomf on vm06 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.132122+0000 mgr.vm06.cofomf (mgr.14193) 40 : cephadm [INF] Reconfiguring daemon mgr.vm06.cofomf on vm06 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.545333+0000 mon.vm06 (mon.0) 297 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.545333+0000 mon.vm06 (mon.0) 297 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.549554+0000 mon.vm06 (mon.0) 298 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.549554+0000 mon.vm06 (mon.0) 298 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.550212+0000 mgr.vm06.cofomf (mgr.14193) 41 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.550212+0000 mgr.vm06.cofomf (mgr.14193) 41 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 
2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.550410+0000 mon.vm06 (mon.0) 299 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.550410+0000 mon.vm06 (mon.0) 299 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.550958+0000 mon.vm06 (mon.0) 300 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.550958+0000 mon.vm06 (mon.0) 300 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.551421+0000 mgr.vm06.cofomf (mgr.14193) 42 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-10T12:43:55.549 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.551421+0000 mgr.vm06.cofomf (mgr.14193) 42 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-10T12:43:55.550 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.950366+0000 mon.vm06 (mon.0) 301 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.950366+0000 mon.vm06 (mon.0) 301 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.954360+0000 mon.vm06 (mon.0) 302 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.954360+0000 mon.vm06 (mon.0) 302 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.955044+0000 mon.vm06 (mon.0) 303 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.955044+0000 mon.vm06 (mon.0) 303 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.955503+0000 mon.vm06 (mon.0) 304 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 
2026-03-10T12:43:54.955503+0000 mon.vm06 (mon.0) 304 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.955927+0000 mon.vm06 (mon.0) 305 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:54.955927+0000 mon.vm06 (mon.0) 305 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.328213+0000 mon.vm06 (mon.0) 306 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.328213+0000 mon.vm06 (mon.0) 306 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.331520+0000 mon.vm06 (mon.0) 307 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.331520+0000 mon.vm06 (mon.0) 307 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.332178+0000 mon.vm06 (mon.0) 308 : audit [INF] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.332178+0000 mon.vm06 (mon.0) 308 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.332709+0000 mon.vm06 (mon.0) 309 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:55 vm09 bash[21409]: audit 2026-03-10T12:43:55.332709+0000 mon.vm06 (mon.0) 309 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:53.724728+0000 mgr.vm06.cofomf (mgr.14193) 36 : cephadm [INF] Reconfiguring crash.vm06 (monmap changed)... 2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:53.724728+0000 mgr.vm06.cofomf (mgr.14193) 36 : cephadm [INF] Reconfiguring crash.vm06 (monmap changed)... 
2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:53.726404+0000 mgr.vm06.cofomf (mgr.14193) 37 : cephadm [INF] Reconfiguring daemon crash.vm06 on vm06 2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:53.726404+0000 mgr.vm06.cofomf (mgr.14193) 37 : cephadm [INF] Reconfiguring daemon crash.vm06 on vm06 2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cluster 2026-03-10T12:43:53.933916+0000 mgr.vm06.cofomf (mgr.14193) 38 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cluster 2026-03-10T12:43:53.933916+0000 mgr.vm06.cofomf (mgr.14193) 38 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.130385+0000 mgr.vm06.cofomf (mgr.14193) 39 : cephadm [INF] Reconfiguring mgr.vm06.cofomf (unknown last config time)... 2026-03-10T12:43:55.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.130385+0000 mgr.vm06.cofomf (mgr.14193) 39 : cephadm [INF] Reconfiguring mgr.vm06.cofomf (unknown last config time)... 
2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.132122+0000 mgr.vm06.cofomf (mgr.14193) 40 : cephadm [INF] Reconfiguring daemon mgr.vm06.cofomf on vm06 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.132122+0000 mgr.vm06.cofomf (mgr.14193) 40 : cephadm [INF] Reconfiguring daemon mgr.vm06.cofomf on vm06 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.545333+0000 mon.vm06 (mon.0) 297 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.545333+0000 mon.vm06 (mon.0) 297 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.549554+0000 mon.vm06 (mon.0) 298 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.549554+0000 mon.vm06 (mon.0) 298 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.550212+0000 mgr.vm06.cofomf (mgr.14193) 41 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.550212+0000 mgr.vm06.cofomf (mgr.14193) 41 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 
2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.550410+0000 mon.vm06 (mon.0) 299 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.550410+0000 mon.vm06 (mon.0) 299 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.550958+0000 mon.vm06 (mon.0) 300 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.550958+0000 mon.vm06 (mon.0) 300 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.551421+0000 mgr.vm06.cofomf (mgr.14193) 42 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.551421+0000 mgr.vm06.cofomf (mgr.14193) 42 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-10T12:43:55.848 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.950366+0000 mon.vm06 (mon.0) 301 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.950366+0000 mon.vm06 (mon.0) 301 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.954360+0000 mon.vm06 (mon.0) 302 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.954360+0000 mon.vm06 (mon.0) 302 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.955044+0000 mon.vm06 (mon.0) 303 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.955044+0000 mon.vm06 (mon.0) 303 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.955503+0000 mon.vm06 (mon.0) 304 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 
2026-03-10T12:43:54.955503+0000 mon.vm06 (mon.0) 304 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.955927+0000 mon.vm06 (mon.0) 305 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:54.955927+0000 mon.vm06 (mon.0) 305 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.328213+0000 mon.vm06 (mon.0) 306 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.328213+0000 mon.vm06 (mon.0) 306 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.331520+0000 mon.vm06 (mon.0) 307 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.331520+0000 mon.vm06 (mon.0) 307 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.332178+0000 mon.vm06 (mon.0) 308 : audit [INF] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.332178+0000 mon.vm06 (mon.0) 308 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.332709+0000 mon.vm06 (mon.0) 309 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:55.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:55 vm06 bash[17497]: audit 2026-03-10T12:43:55.332709+0000 mon.vm06 (mon.0) 309 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.954832+0000 mgr.vm06.cofomf (mgr.14193) 43 : cephadm [INF] Reconfiguring mon.vm09 (monmap changed)... 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.954832+0000 mgr.vm06.cofomf (mgr.14193) 43 : cephadm [INF] Reconfiguring mon.vm09 (monmap changed)... 
2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.956373+0000 mgr.vm06.cofomf (mgr.14193) 44 : cephadm [INF] Reconfiguring daemon mon.vm09 on vm09 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:54.956373+0000 mgr.vm06.cofomf (mgr.14193) 44 : cephadm [INF] Reconfiguring daemon mon.vm09 on vm09 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:55.332021+0000 mgr.vm06.cofomf (mgr.14193) 45 : cephadm [INF] Reconfiguring crash.vm09 (monmap changed)... 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:55.332021+0000 mgr.vm06.cofomf (mgr.14193) 45 : cephadm [INF] Reconfiguring crash.vm09 (monmap changed)... 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:55.333122+0000 mgr.vm06.cofomf (mgr.14193) 46 : cephadm [INF] Reconfiguring daemon crash.vm09 on vm09 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: cephadm 2026-03-10T12:43:55.333122+0000 mgr.vm06.cofomf (mgr.14193) 46 : cephadm [INF] Reconfiguring daemon crash.vm09 on vm09 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.752875+0000 mon.vm06 (mon.0) 310 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.752875+0000 mon.vm06 (mon.0) 310 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.762848+0000 
mon.vm06 (mon.0) 311 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.762848+0000 mon.vm06 (mon.0) 311 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.763638+0000 mon.vm06 (mon.0) 312 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.mcduck", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.763638+0000 mon.vm06 (mon.0) 312 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.mcduck", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.764193+0000 mon.vm06 (mon.0) 313 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.764193+0000 mon.vm06 (mon.0) 313 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.764621+0000 mon.vm06 (mon.0) 314 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-10T12:43:57.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:56 vm06 bash[17497]: audit 2026-03-10T12:43:55.764621+0000 mon.vm06 (mon.0) 314 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.954832+0000 mgr.vm06.cofomf (mgr.14193) 43 : cephadm [INF] Reconfiguring mon.vm09 (monmap changed)... 2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.954832+0000 mgr.vm06.cofomf (mgr.14193) 43 : cephadm [INF] Reconfiguring mon.vm09 (monmap changed)... 2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.956373+0000 mgr.vm06.cofomf (mgr.14193) 44 : cephadm [INF] Reconfiguring daemon mon.vm09 on vm09 2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:54.956373+0000 mgr.vm06.cofomf (mgr.14193) 44 : cephadm [INF] Reconfiguring daemon mon.vm09 on vm09 2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:55.332021+0000 mgr.vm06.cofomf (mgr.14193) 45 : cephadm [INF] Reconfiguring crash.vm09 (monmap changed)... 2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:55.332021+0000 mgr.vm06.cofomf (mgr.14193) 45 : cephadm [INF] Reconfiguring crash.vm09 (monmap changed)... 
2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:55.333122+0000 mgr.vm06.cofomf (mgr.14193) 46 : cephadm [INF] Reconfiguring daemon crash.vm09 on vm09
2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:55.752875+0000 mon.vm06 (mon.0) 310 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:55.762848+0000 mon.vm06 (mon.0) 311 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:55.763638+0000 mon.vm06 (mon.0) 312 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.mcduck", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:55.764193+0000 mon.vm06 (mon.0) 313 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:43:57.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:55.764621+0000 mon.vm06 (mon.0) 314 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:43:57.563 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config
2026-03-10T12:43:57.837 INFO:teuthology.orchestra.run.vm06.stdout:# minimal ceph.conf for 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:43:57.837 INFO:teuthology.orchestra.run.vm06.stdout:[global]
2026-03-10T12:43:57.837 INFO:teuthology.orchestra.run.vm06.stdout: fsid = 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:43:57.837 INFO:teuthology.orchestra.run.vm06.stdout: mon_host = [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0]
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-10T12:43:57.907 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T12:43:57.907 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T12:43:57.907 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T12:43:57.916 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T12:43:57.916 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:43:57.965 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T12:43:57.965 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T12:43:57.974 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T12:43:57.974 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:43:58.025 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T12:43:58.025 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T12:43:58.025 DEBUG:teuthology.orchestra.run.vm06:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T12:43:58.028 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T12:43:58.028 DEBUG:teuthology.orchestra.run.vm06:> ls /dev/[sv]d? 
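The "Distributing (final) config and client.admin keyring" step above streams each file's bytes into `sudo dd of=<target>` on every remote host. A minimal runnable sketch of that push pattern, writing to a temp path instead of /etc/ceph/ceph.conf so it needs no privileges (the content is the minimal conf from the log; the real transport is teuthology's ssh layer, not shown):

```shell
# Sketch of teuthology's file push: stream content into dd on the target.
# Writing to a temp file rather than /etc/ceph/ceph.conf so this runs unprivileged.
target=$(mktemp)
printf '[global]\n fsid = 68e2be40-1c7e-11f1-b779-df2955349a39\n' |
    dd of="$target" 2>/dev/null
cat "$target"
```

On the real hosts the same pipeline runs under `set -ex` with `sudo dd of=/etc/ceph/ceph.conf`, which is why only the command line, not the file content, appears in the log.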
2026-03-10T12:43:58.072 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vda
2026-03-10T12:43:58.072 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdb
2026-03-10T12:43:58.072 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdc
2026-03-10T12:43:58.072 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdd
2026-03-10T12:43:58.072 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vde
2026-03-10T12:43:58.072 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T12:43:58.073 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T12:43:58.073 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdb
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: cephadm 2026-03-10T12:43:55.763456+0000 mgr.vm06.cofomf (mgr.14193) 47 : cephadm [INF] Reconfiguring mgr.vm09.mcduck (monmap changed)...
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: cephadm 2026-03-10T12:43:55.765083+0000 mgr.vm06.cofomf (mgr.14193) 48 : cephadm [INF] Reconfiguring daemon mgr.vm09.mcduck on vm09
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: cluster 2026-03-10T12:43:55.934211+0000 mgr.vm06.cofomf (mgr.14193) 49 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.385123+0000 mon.vm06 (mon.0) 315 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.389666+0000 mon.vm06 (mon.0) 316 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.392810+0000 mon.vm06 (mon.0) 317 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.393120+0000 mgr.vm06.cofomf (mgr.14193) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.393859+0000 mon.vm06 (mon.0) 318 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.394009+0000 mgr.vm06.cofomf (mgr.14193) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.397657+0000 mon.vm06 (mon.0) 319 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.406900+0000 mon.vm06 (mon.0) 320 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.407086+0000 mgr.vm06.cofomf (mgr.14193) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.407656+0000 mon.vm06 (mon.0) 321 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.407833+0000 mgr.vm06.cofomf (mgr.14193) 53 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.410985+0000 mon.vm06 (mon.0) 322 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.418065+0000 mon.vm06 (mon.0) 323 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.418278+0000 mgr.vm06.cofomf (mgr.14193) 54 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.418933+0000 mon.vm06 (mon.0) 324 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.419089+0000 mgr.vm06.cofomf (mgr.14193) 55 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.422238+0000 mon.vm06 (mon.0) 325 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:57 vm06 bash[17497]: audit 2026-03-10T12:43:57.456119+0000 mon.vm06 (mon.0) 326 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:43:58.101 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdb
2026-03-10T12:43:58.101 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T12:43:58.101 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T12:43:58.101 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:43:58.101 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 12:38:12.169214908 +0000
2026-03-10T12:43:58.101 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 12:38:11.001214908 +0000
2026-03-10T12:43:58.101 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 12:38:11.001214908 +0000
2026-03-10T12:43:58.101
INFO:teuthology.orchestra.run.vm06.stdout: Birth: -
2026-03-10T12:43:58.102 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:55.763456+0000 mgr.vm06.cofomf (mgr.14193) 47 : cephadm [INF] Reconfiguring mgr.vm09.mcduck (monmap changed)...
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cephadm 2026-03-10T12:43:55.765083+0000 mgr.vm06.cofomf (mgr.14193) 48 : cephadm [INF] Reconfiguring daemon mgr.vm09.mcduck on vm09
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: cluster 2026-03-10T12:43:55.934211+0000 mgr.vm06.cofomf (mgr.14193) 49 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.385123+0000 mon.vm06 (mon.0) 315 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.389666+0000 mon.vm06 (mon.0) 316 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.392810+0000 mon.vm06 (mon.0) 317 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.393120+0000 mgr.vm06.cofomf (mgr.14193) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.393859+0000 mon.vm06 (mon.0) 318 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.394009+0000 mgr.vm06.cofomf (mgr.14193) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.397657+0000 mon.vm06 (mon.0) 319 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.406900+0000 mon.vm06 (mon.0) 320 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.407086+0000 mgr.vm06.cofomf (mgr.14193) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.407656+0000 mon.vm06 (mon.0) 321 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.407833+0000 mgr.vm06.cofomf (mgr.14193) 53 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.410985+0000 mon.vm06 (mon.0) 322 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.418065+0000 mon.vm06 (mon.0) 323 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:43:58.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.418278+0000 mgr.vm06.cofomf (mgr.14193) 54 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:43:58.110 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.418933+0000 mon.vm06 (mon.0) 324 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch
2026-03-10T12:43:58.110 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.419089+0000 mgr.vm06.cofomf (mgr.14193) 55 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch
2026-03-10T12:43:58.110 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.422238+0000 mon.vm06 (mon.0) 325 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:43:58.110 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:57 vm09 bash[21409]: audit 2026-03-10T12:43:57.456119+0000 mon.vm06 (mon.0) 326 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:43:58.152 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in
2026-03-10T12:43:58.152 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out
2026-03-10T12:43:58.152 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000181158 s, 2.8 MB/s
2026-03-10T12:43:58.153 DEBUG:teuthology.orchestra.run.vm06:> !
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T12:43:58.198 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdc 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdc 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 12:38:12.177214908 +0000 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 12:38:10.993214908 +0000 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 12:38:10.993214908 +0000 2026-03-10T12:43:58.245 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-10T12:43:58.245 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T12:43:58.291 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T12:43:58.291 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T12:43:58.291 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000148748 s, 3.4 MB/s 2026-03-10T12:43:58.292 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T12:43:58.337 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdd 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdd 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 12:38:12.149214908 +0000 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 12:38:10.993214908 +0000 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 12:38:10.993214908 +0000 2026-03-10T12:43:58.384 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-10T12:43:58.384 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T12:43:58.431 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T12:43:58.431 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T12:43:58.431 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000151103 s, 3.4 MB/s 2026-03-10T12:43:58.432 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T12:43:58.478 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vde 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vde 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 12:38:12.177214908 +0000 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 12:38:10.997214908 +0000 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 12:38:10.997214908 +0000 2026-03-10T12:43:58.524 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-10T12:43:58.524 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T12:43:58.572 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T12:43:58.572 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T12:43:58.572 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000163666 s, 3.1 MB/s 2026-03-10T12:43:58.573 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T12:43:58.621 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T12:43:58.621 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T12:43:58.624 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T12:43:58.625 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 
2026-03-10T12:43:58.669 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda 2026-03-10T12:43:58.677 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb 2026-03-10T12:43:58.677 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc 2026-03-10T12:43:58.677 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd 2026-03-10T12:43:58.677 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde 2026-03-10T12:43:58.677 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T12:43:58.677 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T12:43:58.677 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb 2026-03-10T12:43:58.713 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb 2026-03-10T12:43:58.714 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T12:43:58.714 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T12:43:58.714 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T12:43:58.714 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:37:40.922384908 +0000 2026-03-10T12:43:58.714 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:37:39.814384908 +0000 2026-03-10T12:43:58.714 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:37:39.814384908 +0000 2026-03-10T12:43:58.714 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-10T12:43:58.714 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T12:43:58.761 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T12:43:58.798 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T12:43:58.799 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000143228 s, 3.6 MB/s 2026-03-10T12:43:58.799 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T12:43:58.842 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc 2026-03-10T12:43:58.885 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc 2026-03-10T12:43:58.911 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T12:43:58.911 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T12:43:58.911 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T12:43:58.911 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:37:40.934384908 +0000 2026-03-10T12:43:58.911 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:37:39.822384908 +0000 2026-03-10T12:43:58.911 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:37:39.822384908 +0000 2026-03-10T12:43:58.911 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-10T12:43:58.911 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T12:43:58.933 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T12:43:58.933 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T12:43:58.933 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000115155 s, 4.4 MB/s 2026-03-10T12:43:58.933 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T12:43:58.979 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd 2026-03-10T12:43:59.021 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd 2026-03-10T12:43:59.025 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T12:43:59.025 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T12:43:59.025 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T12:43:59.025 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:37:40.922384908 +0000 2026-03-10T12:43:59.025 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:37:39.810384908 +0000 2026-03-10T12:43:59.025 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:37:39.810384908 +0000 2026-03-10T12:43:59.025 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-10T12:43:59.025 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T12:43:59.069 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T12:43:59.069 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T12:43:59.069 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000126546 s, 4.0 MB/s 2026-03-10T12:43:59.070 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T12:43:59.115 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:37:40.930384908 +0000 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:37:39.814384908 +0000 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:37:39.814384908 +0000 2026-03-10T12:43:59.162 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-10T12:43:59.162 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T12:43:59.209 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-10T12:43:59.209 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-10T12:43:59.209 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000147797 s, 3.5 MB/s 2026-03-10T12:43:59.210 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T12:43:59.254 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph orch apply osd --all-available-devices 2026-03-10T12:43:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:59 vm06 bash[17497]: audit 2026-03-10T12:43:57.837896+0000 mon.vm06 (mon.0) 327 : audit [DBG] from='client.? 
192.168.123.106:0/212523336' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:59 vm06 bash[17497]: audit 2026-03-10T12:43:57.837896+0000 mon.vm06 (mon.0) 327 : audit [DBG] from='client.? 192.168.123.106:0/212523336' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:59 vm06 bash[17497]: cluster 2026-03-10T12:43:57.934427+0000 mgr.vm06.cofomf (mgr.14193) 56 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:43:59 vm06 bash[17497]: cluster 2026-03-10T12:43:57.934427+0000 mgr.vm06.cofomf (mgr.14193) 56 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:59 vm09 bash[21409]: audit 2026-03-10T12:43:57.837896+0000 mon.vm06 (mon.0) 327 : audit [DBG] from='client.? 192.168.123.106:0/212523336' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:59 vm09 bash[21409]: audit 2026-03-10T12:43:57.837896+0000 mon.vm06 (mon.0) 327 : audit [DBG] from='client.? 
192.168.123.106:0/212523336' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:43:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:59 vm09 bash[21409]: cluster 2026-03-10T12:43:57.934427+0000 mgr.vm06.cofomf (mgr.14193) 56 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:43:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:43:59 vm09 bash[21409]: cluster 2026-03-10T12:43:57.934427+0000 mgr.vm06.cofomf (mgr.14193) 56 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:01.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:01 vm06 bash[17497]: cluster 2026-03-10T12:43:59.934611+0000 mgr.vm06.cofomf (mgr.14193) 57 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:01.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:01 vm06 bash[17497]: cluster 2026-03-10T12:43:59.934611+0000 mgr.vm06.cofomf (mgr.14193) 57 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:01.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:01 vm09 bash[21409]: cluster 2026-03-10T12:43:59.934611+0000 mgr.vm06.cofomf (mgr.14193) 57 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:01.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:01 vm09 bash[21409]: cluster 2026-03-10T12:43:59.934611+0000 mgr.vm06.cofomf (mgr.14193) 57 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:02.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:02 vm06 bash[17497]: audit 2026-03-10T12:44:01.990748+0000 mon.vm06 (mon.0) 328 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:02.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:02 vm06 bash[17497]: audit 
2026-03-10T12:44:01.990748+0000 mon.vm06 (mon.0) 328 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:02.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:02 vm09 bash[21409]: audit 2026-03-10T12:44:01.990748+0000 mon.vm06 (mon.0) 328 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:02.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:02 vm09 bash[21409]: audit 2026-03-10T12:44:01.990748+0000 mon.vm06 (mon.0) 328 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:03.245 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm09/config 2026-03-10T12:44:03.521 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled osd.all-available-devices update... 
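The `Scheduled osd.all-available-devices update...` line above is the result of running `ceph orch apply osd --all-available-devices` inside a `cephadm shell` pinned to the test image and cluster fsid. The wrapper command line can be sketched from what the log shows (a simplified reconstruction of the visible invocation, not teuthology's actual helper):

```python
def cephadm_shell_cmd(cephadm, image, fsid, ceph_args):
    # Build the `sudo cephadm --image ... shell ... -- ceph ...` command
    # line echoed by teuthology.orchestra.run above.
    return [
        'sudo', cephadm,
        '--image', image,
        'shell',
        '-c', '/etc/ceph/ceph.conf',
        '-k', '/etc/ceph/ceph.client.admin.keyring',
        '--fsid', fsid,
        '--',
        'ceph', *ceph_args,
    ]

cmd = cephadm_shell_cmd(
    '/home/ubuntu/cephtest/cephadm',
    'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df',
    '68e2be40-1c7e-11f1-b779-df2955349a39',
    ['orch', 'apply', 'osd', '--all-available-devices'],
)
print(' '.join(cmd))
```

Everything after `--` runs as a plain `ceph` CLI command inside the container, which is why the mgr audit log records it as `{"prefix": "orch apply osd", "all_available_devices": true}`.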
2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: cluster 2026-03-10T12:44:01.934857+0000 mgr.vm06.cofomf (mgr.14193) 58 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: cluster 2026-03-10T12:44:01.934857+0000 mgr.vm06.cofomf (mgr.14193) 58 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.131372+0000 mon.vm06 (mon.0) 329 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.131372+0000 mon.vm06 (mon.0) 329 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.136198+0000 mon.vm06 (mon.0) 330 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.136198+0000 mon.vm06 (mon.0) 330 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.498921+0000 mon.vm06 (mon.0) 331 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.498921+0000 mon.vm06 (mon.0) 331 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.504916+0000 mon.vm06 (mon.0) 332 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.504916+0000 mon.vm06 (mon.0) 332 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.506131+0000 mon.vm06 (mon.0) 333 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.506131+0000 mon.vm06 (mon.0) 333 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.506634+0000 mon.vm06 (mon.0) 334 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.506634+0000 mon.vm06 (mon.0) 334 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:03.535 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.511126+0000 mon.vm06 (mon.0) 335 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.535 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:03 vm09 bash[21409]: audit 2026-03-10T12:44:02.511126+0000 mon.vm06 (mon.0) 335 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.591 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 2026-03-10T12:44:03.591 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd stat -f json 2026-03-10T12:44:03.596 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: cluster 2026-03-10T12:44:01.934857+0000 mgr.vm06.cofomf (mgr.14193) 58 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: cluster 2026-03-10T12:44:01.934857+0000 mgr.vm06.cofomf (mgr.14193) 58 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.131372+0000 mon.vm06 (mon.0) 329 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.131372+0000 mon.vm06 (mon.0) 329 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.136198+0000 mon.vm06 (mon.0) 330 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.136198+0000 mon.vm06 (mon.0) 330 : audit [INF] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.498921+0000 mon.vm06 (mon.0) 331 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.498921+0000 mon.vm06 (mon.0) 331 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.504916+0000 mon.vm06 (mon.0) 332 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.504916+0000 mon.vm06 (mon.0) 332 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.506131+0000 mon.vm06 (mon.0) 333 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.506131+0000 mon.vm06 (mon.0) 333 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.506634+0000 mon.vm06 (mon.0) 334 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:03.597 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.506634+0000 mon.vm06 (mon.0) 334 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.511126+0000 mon.vm06 (mon.0) 335 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:03 vm06 bash[17497]: audit 2026-03-10T12:44:02.511126+0000 mon.vm06 (mon.0) 335 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:04.523 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: audit 2026-03-10T12:44:03.514711+0000 mgr.vm06.cofomf (mgr.14193) 59 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: audit 2026-03-10T12:44:03.514711+0000 mgr.vm06.cofomf (mgr.14193) 59 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: cephadm 2026-03-10T12:44:03.515708+0000 mgr.vm06.cofomf (mgr.14193) 60 : cephadm [INF] Marking host: vm06 for OSDSpec preview refresh. 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: cephadm 2026-03-10T12:44:03.515708+0000 mgr.vm06.cofomf (mgr.14193) 60 : cephadm [INF] Marking host: vm06 for OSDSpec preview refresh. 
2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: cephadm 2026-03-10T12:44:03.515733+0000 mgr.vm06.cofomf (mgr.14193) 61 : cephadm [INF] Marking host: vm09 for OSDSpec preview refresh. 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: cephadm 2026-03-10T12:44:03.515733+0000 mgr.vm06.cofomf (mgr.14193) 61 : cephadm [INF] Marking host: vm09 for OSDSpec preview refresh. 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: cephadm 2026-03-10T12:44:03.515882+0000 mgr.vm06.cofomf (mgr.14193) 62 : cephadm [INF] Saving service osd.all-available-devices spec with placement * 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: cephadm 2026-03-10T12:44:03.515882+0000 mgr.vm06.cofomf (mgr.14193) 62 : cephadm [INF] Saving service osd.all-available-devices spec with placement * 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: audit 2026-03-10T12:44:03.520697+0000 mon.vm06 (mon.0) 336 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: audit 2026-03-10T12:44:03.520697+0000 mon.vm06 (mon.0) 336 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: audit 2026-03-10T12:44:03.521822+0000 mon.vm06 (mon.0) 337 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:04.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:04 vm09 bash[21409]: audit 2026-03-10T12:44:03.521822+0000 mon.vm06 (mon.0) 337 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: audit 2026-03-10T12:44:03.514711+0000 mgr.vm06.cofomf (mgr.14193) 59 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: audit 2026-03-10T12:44:03.514711+0000 mgr.vm06.cofomf (mgr.14193) 59 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: cephadm 2026-03-10T12:44:03.515708+0000 mgr.vm06.cofomf (mgr.14193) 60 : cephadm [INF] Marking host: vm06 for OSDSpec preview refresh. 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: cephadm 2026-03-10T12:44:03.515708+0000 mgr.vm06.cofomf (mgr.14193) 60 : cephadm [INF] Marking host: vm06 for OSDSpec preview refresh. 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: cephadm 2026-03-10T12:44:03.515733+0000 mgr.vm06.cofomf (mgr.14193) 61 : cephadm [INF] Marking host: vm09 for OSDSpec preview refresh. 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: cephadm 2026-03-10T12:44:03.515733+0000 mgr.vm06.cofomf (mgr.14193) 61 : cephadm [INF] Marking host: vm09 for OSDSpec preview refresh. 
2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: cephadm 2026-03-10T12:44:03.515882+0000 mgr.vm06.cofomf (mgr.14193) 62 : cephadm [INF] Saving service osd.all-available-devices spec with placement * 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: cephadm 2026-03-10T12:44:03.515882+0000 mgr.vm06.cofomf (mgr.14193) 62 : cephadm [INF] Saving service osd.all-available-devices spec with placement * 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: audit 2026-03-10T12:44:03.520697+0000 mon.vm06 (mon.0) 336 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: audit 2026-03-10T12:44:03.520697+0000 mon.vm06 (mon.0) 336 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: audit 2026-03-10T12:44:03.521822+0000 mon.vm06 (mon.0) 337 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:04.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:04 vm06 bash[17497]: audit 2026-03-10T12:44:03.521822+0000 mon.vm06 (mon.0) 337 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:05.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:05 vm06 bash[17497]: cluster 2026-03-10T12:44:03.935051+0000 mgr.vm06.cofomf (mgr.14193) 63 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:05.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:05 vm06 bash[17497]: cluster 2026-03-10T12:44:03.935051+0000 mgr.vm06.cofomf 
(mgr.14193) 63 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:05.858 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:05 vm09 bash[21409]: cluster 2026-03-10T12:44:03.935051+0000 mgr.vm06.cofomf (mgr.14193) 63 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:05.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:05 vm09 bash[21409]: cluster 2026-03-10T12:44:03.935051+0000 mgr.vm06.cofomf (mgr.14193) 63 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:07.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:07 vm06 bash[17497]: cluster 2026-03-10T12:44:05.935208+0000 mgr.vm06.cofomf (mgr.14193) 64 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:07.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:07 vm06 bash[17497]: cluster 2026-03-10T12:44:05.935208+0000 mgr.vm06.cofomf (mgr.14193) 64 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:07.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:07 vm09 bash[21409]: cluster 2026-03-10T12:44:05.935208+0000 mgr.vm06.cofomf (mgr.14193) 64 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:07.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:07 vm09 bash[21409]: cluster 2026-03-10T12:44:05.935208+0000 mgr.vm06.cofomf (mgr.14193) 64 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:08.224 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:09.220 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:09.234 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:08 vm06 bash[17497]: cluster 2026-03-10T12:44:07.935360+0000 mgr.vm06.cofomf (mgr.14193) 65 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B 
avail 2026-03-10T12:44:09.235 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:08 vm06 bash[17497]: cluster 2026-03-10T12:44:07.935360+0000 mgr.vm06.cofomf (mgr.14193) 65 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:09.301 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0} 2026-03-10T12:44:09.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:08 vm09 bash[21409]: cluster 2026-03-10T12:44:07.935360+0000 mgr.vm06.cofomf (mgr.14193) 65 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:09.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:08 vm09 bash[21409]: cluster 2026-03-10T12:44:07.935360+0000 mgr.vm06.cofomf (mgr.14193) 65 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:10.302 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd stat -f json 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.954374+0000 mon.vm06 (mon.0) 338 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.954374+0000 mon.vm06 (mon.0) 338 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.961249+0000 mon.vm06 (mon.0) 339 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.961249+0000 mon.vm06 (mon.0) 339 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.967975+0000 mon.vm06 (mon.0) 340 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.967975+0000 mon.vm06 (mon.0) 340 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.973543+0000 mon.vm06 (mon.0) 341 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.973543+0000 mon.vm06 (mon.0) 341 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.979543+0000 mon.vm06 (mon.0) 342 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.979543+0000 mon.vm06 (mon.0) 342 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.984674+0000 mon.vm06 (mon.0) 343 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 
10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.984674+0000 mon.vm06 (mon.0) 343 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.989792+0000 mon.vm06 (mon.0) 344 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.989792+0000 mon.vm06 (mon.0) 344 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.997783+0000 mon.vm06 (mon.0) 345 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.997783+0000 mon.vm06 (mon.0) 345 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.999530+0000 mon.vm06 (mon.0) 346 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:08.999530+0000 mon.vm06 (mon.0) 346 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.000127+0000 mon.vm06 (mon.0) 347 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.000127+0000 mon.vm06 (mon.0) 347 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.007757+0000 mon.vm06 (mon.0) 348 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.007757+0000 mon.vm06 (mon.0) 348 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.009215+0000 mon.vm06 (mon.0) 349 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.009215+0000 mon.vm06 (mon.0) 349 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.011322+0000 mon.vm06 (mon.0) 350 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.011322+0000 mon.vm06 (mon.0) 350 : audit 
[INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.011748+0000 mon.vm06 (mon.0) 351 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.011748+0000 mon.vm06 (mon.0) 351 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.013430+0000 mon.vm06 (mon.0) 352 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.013430+0000 mon.vm06 (mon.0) 352 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.013831+0000 mon.vm06 (mon.0) 353 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.013831+0000 mon.vm06 (mon.0) 353 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.216337+0000 mon.vm06 (mon.0) 354 : audit [DBG] from='client.? 192.168.123.106:0/2568544064' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:10.309 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:09 vm06 bash[17497]: audit 2026-03-10T12:44:09.216337+0000 mon.vm06 (mon.0) 354 : audit [DBG] from='client.? 192.168.123.106:0/2568544064' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.954374+0000 mon.vm06 (mon.0) 338 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.954374+0000 mon.vm06 (mon.0) 338 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.961249+0000 mon.vm06 (mon.0) 339 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.961249+0000 mon.vm06 (mon.0) 339 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.967975+0000 mon.vm06 (mon.0) 340 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.967975+0000 mon.vm06 (mon.0) 340 : audit [INF] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.973543+0000 mon.vm06 (mon.0) 341 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.973543+0000 mon.vm06 (mon.0) 341 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.979543+0000 mon.vm06 (mon.0) 342 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.979543+0000 mon.vm06 (mon.0) 342 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.984674+0000 mon.vm06 (mon.0) 343 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.984674+0000 mon.vm06 (mon.0) 343 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.989792+0000 mon.vm06 (mon.0) 344 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.989792+0000 mon.vm06 (mon.0) 344 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.997783+0000 mon.vm06 (mon.0) 345 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.997783+0000 mon.vm06 (mon.0) 345 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.999530+0000 mon.vm06 (mon.0) 346 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:08.999530+0000 mon.vm06 (mon.0) 346 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.000127+0000 mon.vm06 (mon.0) 347 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.000127+0000 mon.vm06 (mon.0) 347 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.007757+0000 mon.vm06 (mon.0) 348 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.007757+0000 mon.vm06 (mon.0) 348 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.009215+0000 mon.vm06 (mon.0) 349 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.009215+0000 mon.vm06 (mon.0) 349 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.011322+0000 mon.vm06 (mon.0) 350 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.011322+0000 mon.vm06 (mon.0) 350 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.011748+0000 mon.vm06 (mon.0) 351 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.011748+0000 mon.vm06 (mon.0) 351 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.013430+0000 mon.vm06 (mon.0) 352 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.013430+0000 mon.vm06 (mon.0) 352 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T12:44:10.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.013831+0000 mon.vm06 (mon.0) 353 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.013831+0000 mon.vm06 (mon.0) 353 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:10.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.216337+0000 mon.vm06 (mon.0) 354 : audit [DBG] from='client.? 192.168.123.106:0/2568544064' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:10.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:09 vm09 bash[21409]: audit 2026-03-10T12:44:09.216337+0000 mon.vm06 (mon.0) 354 : audit [DBG] from='client.? 
192.168.123.106:0/2568544064' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:11.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:11 vm06 bash[17497]: cluster 2026-03-10T12:44:09.935520+0000 mgr.vm06.cofomf (mgr.14193) 66 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:11.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:11 vm06 bash[17497]: cluster 2026-03-10T12:44:09.935520+0000 mgr.vm06.cofomf (mgr.14193) 66 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:11.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:11 vm09 bash[21409]: cluster 2026-03-10T12:44:09.935520+0000 mgr.vm06.cofomf (mgr.14193) 66 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:11.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:11 vm09 bash[21409]: cluster 2026-03-10T12:44:09.935520+0000 mgr.vm06.cofomf (mgr.14193) 66 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:13.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:13 vm06 bash[17497]: cluster 2026-03-10T12:44:11.935775+0000 mgr.vm06.cofomf (mgr.14193) 67 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:13.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:13 vm06 bash[17497]: cluster 2026-03-10T12:44:11.935775+0000 mgr.vm06.cofomf (mgr.14193) 67 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:13.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:13 vm09 bash[21409]: cluster 2026-03-10T12:44:11.935775+0000 mgr.vm06.cofomf (mgr.14193) 67 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:13.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:13 vm09 bash[21409]: cluster 2026-03-10T12:44:11.935775+0000 mgr.vm06.cofomf (mgr.14193) 67 : cluster [DBG] 
pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: cluster 2026-03-10T12:44:13.935992+0000 mgr.vm06.cofomf (mgr.14193) 68 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: cluster 2026-03-10T12:44:13.935992+0000 mgr.vm06.cofomf (mgr.14193) 68 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.928375+0000 mon.vm09 (mon.1) 2 : audit [INF] from='client.? 192.168.123.109:0/2818073312' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.928375+0000 mon.vm09 (mon.1) 2 : audit [INF] from='client.? 192.168.123.109:0/2818073312' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.929681+0000 mon.vm06 (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.929681+0000 mon.vm06 (mon.0) 355 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.932807+0000 mon.vm06 (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]': finished 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.932807+0000 mon.vm06 (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]': finished 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: cluster 2026-03-10T12:44:14.935982+0000 mon.vm06 (mon.0) 357 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: cluster 2026-03-10T12:44:14.935982+0000 mon.vm06 (mon.0) 357 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.936244+0000 mon.vm06 (mon.0) 358 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:14.936244+0000 mon.vm06 (mon.0) 358 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:15.019924+0000 mon.vm06 (mon.0) 359 : audit [INF] from='client.? 
192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:15.019924+0000 mon.vm06 (mon.0) 359 : audit [INF] from='client.? 192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]: dispatch 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:15.022632+0000 mon.vm06 (mon.0) 360 : audit [INF] from='client.? 192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]': finished 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: audit 2026-03-10T12:44:15.022632+0000 mon.vm06 (mon.0) 360 : audit [INF] from='client.? 
192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]': finished 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: cluster 2026-03-10T12:44:15.025728+0000 mon.vm06 (mon.0) 361 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in 2026-03-10T12:44:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:15 vm06 bash[17497]: cluster 2026-03-10T12:44:15.025728+0000 mon.vm06 (mon.0) 361 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: cluster 2026-03-10T12:44:13.935992+0000 mgr.vm06.cofomf (mgr.14193) 68 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: cluster 2026-03-10T12:44:13.935992+0000 mgr.vm06.cofomf (mgr.14193) 68 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.928375+0000 mon.vm09 (mon.1) 2 : audit [INF] from='client.? 192.168.123.109:0/2818073312' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.928375+0000 mon.vm09 (mon.1) 2 : audit [INF] from='client.? 192.168.123.109:0/2818073312' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.929681+0000 mon.vm06 (mon.0) 355 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.929681+0000 mon.vm06 (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.932807+0000 mon.vm06 (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]': finished 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.932807+0000 mon.vm06 (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7f2eb4cc-66ba-45fb-9311-be96c8a18633"}]': finished 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: cluster 2026-03-10T12:44:14.935982+0000 mon.vm06 (mon.0) 357 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: cluster 2026-03-10T12:44:14.935982+0000 mon.vm06 (mon.0) 357 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.936244+0000 mon.vm06 (mon.0) 358 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:14.936244+0000 mon.vm06 (mon.0) 358 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:15.019924+0000 mon.vm06 (mon.0) 359 : audit [INF] from='client.? 192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:15.019924+0000 mon.vm06 (mon.0) 359 : audit [INF] from='client.? 192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]: dispatch 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:15.022632+0000 mon.vm06 (mon.0) 360 : audit [INF] from='client.? 192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]': finished 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: audit 2026-03-10T12:44:15.022632+0000 mon.vm06 (mon.0) 360 : audit [INF] from='client.? 
192.168.123.106:0/3966434281' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bdbd3134-047c-4796-a7c4-704227861edc"}]': finished 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: cluster 2026-03-10T12:44:15.025728+0000 mon.vm06 (mon.0) 361 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in 2026-03-10T12:44:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:15 vm09 bash[21409]: cluster 2026-03-10T12:44:15.025728+0000 mon.vm06 (mon.0) 361 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in 2026-03-10T12:44:15.867 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:16.176 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:16.186 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.025888+0000 mon.vm06 (mon.0) 362 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:16.186 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.025888+0000 mon.vm06 (mon.0) 362 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:16.186 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.027280+0000 mon.vm06 (mon.0) 363 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:16.186 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.027280+0000 mon.vm06 (mon.0) 363 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:16.186 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.591918+0000 mon.vm09 (mon.1) 3 : audit [DBG] from='client.? 192.168.123.109:0/587921013' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:16.186 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.591918+0000 mon.vm09 (mon.1) 3 : audit [DBG] from='client.? 192.168.123.109:0/587921013' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:16.186 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.680937+0000 mon.vm06 (mon.0) 364 : audit [DBG] from='client.? 192.168.123.106:0/525631607' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:16.186 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:16 vm06 bash[17497]: audit 2026-03-10T12:44:15.680937+0000 mon.vm06 (mon.0) 364 : audit [DBG] from='client.? 
192.168.123.106:0/525631607' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:16.246 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1773146655,"num_remapped_pgs":0} 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.025888+0000 mon.vm06 (mon.0) 362 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.025888+0000 mon.vm06 (mon.0) 362 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.027280+0000 mon.vm06 (mon.0) 363 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.027280+0000 mon.vm06 (mon.0) 363 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.591918+0000 mon.vm09 (mon.1) 3 : audit [DBG] from='client.? 192.168.123.109:0/587921013' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.591918+0000 mon.vm09 (mon.1) 3 : audit [DBG] from='client.? 
192.168.123.109:0/587921013' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.680937+0000 mon.vm06 (mon.0) 364 : audit [DBG] from='client.? 192.168.123.106:0/525631607' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:16.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:16 vm09 bash[21409]: audit 2026-03-10T12:44:15.680937+0000 mon.vm06 (mon.0) 364 : audit [DBG] from='client.? 192.168.123.106:0/525631607' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:17.247 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd stat -f json 2026-03-10T12:44:17.253 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:17 vm06 bash[17497]: cluster 2026-03-10T12:44:15.936209+0000 mgr.vm06.cofomf (mgr.14193) 69 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:17.253 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:17 vm06 bash[17497]: cluster 2026-03-10T12:44:15.936209+0000 mgr.vm06.cofomf (mgr.14193) 69 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:17.253 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:17 vm06 bash[17497]: audit 2026-03-10T12:44:16.176877+0000 mon.vm06 (mon.0) 365 : audit [DBG] from='client.? 192.168.123.106:0/3004891685' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:17.253 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:17 vm06 bash[17497]: audit 2026-03-10T12:44:16.176877+0000 mon.vm06 (mon.0) 365 : audit [DBG] from='client.? 
192.168.123.106:0/3004891685' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:17.253 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:17 vm06 bash[17497]: audit 2026-03-10T12:44:16.989464+0000 mon.vm06 (mon.0) 366 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:17.253 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:17 vm06 bash[17497]: audit 2026-03-10T12:44:16.989464+0000 mon.vm06 (mon.0) 366 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:17 vm09 bash[21409]: cluster 2026-03-10T12:44:15.936209+0000 mgr.vm06.cofomf (mgr.14193) 69 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:17 vm09 bash[21409]: cluster 2026-03-10T12:44:15.936209+0000 mgr.vm06.cofomf (mgr.14193) 69 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:17 vm09 bash[21409]: audit 2026-03-10T12:44:16.176877+0000 mon.vm06 (mon.0) 365 : audit [DBG] from='client.? 192.168.123.106:0/3004891685' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:17 vm09 bash[21409]: audit 2026-03-10T12:44:16.176877+0000 mon.vm06 (mon.0) 365 : audit [DBG] from='client.? 
192.168.123.106:0/3004891685' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:17 vm09 bash[21409]: audit 2026-03-10T12:44:16.989464+0000 mon.vm06 (mon.0) 366 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:17 vm09 bash[21409]: audit 2026-03-10T12:44:16.989464+0000 mon.vm06 (mon.0) 366 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: cluster 2026-03-10T12:44:17.936400+0000 mgr.vm06.cofomf (mgr.14193) 70 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: cluster 2026-03-10T12:44:17.936400+0000 mgr.vm06.cofomf (mgr.14193) 70 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.810647+0000 mon.vm06 (mon.0) 367 : audit [INF] from='client.? 192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.810647+0000 mon.vm06 (mon.0) 367 : audit [INF] from='client.? 
192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.813218+0000 mon.vm06 (mon.0) 368 : audit [INF] from='client.? 192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]': finished 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.813218+0000 mon.vm06 (mon.0) 368 : audit [INF] from='client.? 192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]': finished 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: cluster 2026-03-10T12:44:18.816178+0000 mon.vm06 (mon.0) 369 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: cluster 2026-03-10T12:44:18.816178+0000 mon.vm06 (mon.0) 369 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.816373+0000 mon.vm06 (mon.0) 370 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.816373+0000 mon.vm06 (mon.0) 370 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.816438+0000 mon.vm06 (mon.0) 371 : audit [DBG] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.816438+0000 mon.vm06 (mon.0) 371 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.816470+0000 mon.vm06 (mon.0) 372 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.816470+0000 mon.vm06 (mon.0) 372 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.961576+0000 mon.vm09 (mon.1) 4 : audit [INF] from='client.? 192.168.123.109:0/616024113' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.961576+0000 mon.vm09 (mon.1) 4 : audit [INF] from='client.? 192.168.123.109:0/616024113' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.962995+0000 mon.vm06 (mon.0) 373 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.962995+0000 mon.vm06 (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.965945+0000 mon.vm06 (mon.0) 374 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]': finished 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.965945+0000 mon.vm06 (mon.0) 374 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]': finished 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: cluster 2026-03-10T12:44:18.968514+0000 mon.vm06 (mon.0) 375 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: cluster 2026-03-10T12:44:18.968514+0000 mon.vm06 (mon.0) 375 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.968946+0000 mon.vm06 (mon.0) 376 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.968946+0000 mon.vm06 (mon.0) 376 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.969310+0000 mon.vm06 (mon.0) 377 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.969310+0000 mon.vm06 (mon.0) 377 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.969772+0000 mon.vm06 (mon.0) 378 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.969772+0000 mon.vm06 (mon.0) 378 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.970115+0000 mon.vm06 (mon.0) 379 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:19 vm06 bash[17497]: audit 2026-03-10T12:44:18.970115+0000 mon.vm06 (mon.0) 379 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: cluster 2026-03-10T12:44:17.936400+0000 mgr.vm06.cofomf (mgr.14193) 70 
: cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: cluster 2026-03-10T12:44:17.936400+0000 mgr.vm06.cofomf (mgr.14193) 70 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.810647+0000 mon.vm06 (mon.0) 367 : audit [INF] from='client.? 192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.810647+0000 mon.vm06 (mon.0) 367 : audit [INF] from='client.? 192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.813218+0000 mon.vm06 (mon.0) 368 : audit [INF] from='client.? 192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]': finished 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.813218+0000 mon.vm06 (mon.0) 368 : audit [INF] from='client.? 
192.168.123.106:0/2354852055' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac7e07e1-6b13-4553-a71e-9ffd56a18bd7"}]': finished 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: cluster 2026-03-10T12:44:18.816178+0000 mon.vm06 (mon.0) 369 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: cluster 2026-03-10T12:44:18.816178+0000 mon.vm06 (mon.0) 369 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.816373+0000 mon.vm06 (mon.0) 370 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.816373+0000 mon.vm06 (mon.0) 370 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.816438+0000 mon.vm06 (mon.0) 371 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.816438+0000 mon.vm06 (mon.0) 371 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.816470+0000 mon.vm06 (mon.0) 372 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.816470+0000 mon.vm06 (mon.0) 372 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.961576+0000 mon.vm09 (mon.1) 4 : audit [INF] from='client.? 192.168.123.109:0/616024113' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.961576+0000 mon.vm09 (mon.1) 4 : audit [INF] from='client.? 192.168.123.109:0/616024113' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.962995+0000 mon.vm06 (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.962995+0000 mon.vm06 (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.965945+0000 mon.vm06 (mon.0) 374 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]': finished 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.965945+0000 mon.vm06 (mon.0) 374 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcac5ce6-457a-460f-a4b9-c37d8346929c"}]': finished 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: cluster 2026-03-10T12:44:18.968514+0000 mon.vm06 (mon.0) 375 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: cluster 2026-03-10T12:44:18.968514+0000 mon.vm06 (mon.0) 375 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.968946+0000 mon.vm06 (mon.0) 376 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.968946+0000 mon.vm06 (mon.0) 376 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.969310+0000 mon.vm06 (mon.0) 377 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.969310+0000 mon.vm06 (mon.0) 377 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd 
metadata", "id": 1}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.969772+0000 mon.vm06 (mon.0) 378 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.969772+0000 mon.vm06 (mon.0) 378 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.970115+0000 mon.vm06 (mon.0) 379 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:19 vm09 bash[21409]: audit 2026-03-10T12:44:18.970115+0000 mon.vm06 (mon.0) 379 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:20.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:20 vm06 bash[17497]: audit 2026-03-10T12:44:19.440898+0000 mon.vm06 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.106:0/2557231543' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:20.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:20 vm06 bash[17497]: audit 2026-03-10T12:44:19.440898+0000 mon.vm06 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.106:0/2557231543' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:20.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:20 vm06 bash[17497]: audit 2026-03-10T12:44:19.603848+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 
192.168.123.109:0/1967696850' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:20.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:20 vm06 bash[17497]: audit 2026-03-10T12:44:19.603848+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.109:0/1967696850' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:20.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:20 vm09 bash[21409]: audit 2026-03-10T12:44:19.440898+0000 mon.vm06 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.106:0/2557231543' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:20.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:20 vm09 bash[21409]: audit 2026-03-10T12:44:19.440898+0000 mon.vm06 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.106:0/2557231543' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:20.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:20 vm09 bash[21409]: audit 2026-03-10T12:44:19.603848+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.109:0/1967696850' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:20.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:20 vm09 bash[21409]: audit 2026-03-10T12:44:19.603848+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 
192.168.123.109:0/1967696850' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:21.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:21 vm06 bash[17497]: cluster 2026-03-10T12:44:19.936585+0000 mgr.vm06.cofomf (mgr.14193) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:21.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:21 vm06 bash[17497]: cluster 2026-03-10T12:44:19.936585+0000 mgr.vm06.cofomf (mgr.14193) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:21.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:21 vm09 bash[21409]: cluster 2026-03-10T12:44:19.936585+0000 mgr.vm06.cofomf (mgr.14193) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:21.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:21 vm09 bash[21409]: cluster 2026-03-10T12:44:19.936585+0000 mgr.vm06.cofomf (mgr.14193) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:21.894 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:22.176 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:22.236 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773146658,"num_remapped_pgs":0} 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: cluster 2026-03-10T12:44:21.936775+0000 mgr.vm06.cofomf (mgr.14193) 72 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: cluster 2026-03-10T12:44:21.936775+0000 mgr.vm06.cofomf (mgr.14193) 72 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:23.162 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.176673+0000 mon.vm06 (mon.0) 381 : audit [DBG] from='client.? 192.168.123.106:0/98288318' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.176673+0000 mon.vm06 (mon.0) 381 : audit [DBG] from='client.? 192.168.123.106:0/98288318' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.705611+0000 mon.vm09 (mon.1) 6 : audit [INF] from='client.? 192.168.123.109:0/692551613' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.705611+0000 mon.vm09 (mon.1) 6 : audit [INF] from='client.? 192.168.123.109:0/692551613' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.706955+0000 mon.vm06 (mon.0) 382 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.706955+0000 mon.vm06 (mon.0) 382 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.709671+0000 mon.vm06 (mon.0) 383 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]': finished 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.709671+0000 mon.vm06 (mon.0) 383 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]': finished 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: cluster 2026-03-10T12:44:22.712155+0000 mon.vm06 (mon.0) 384 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: cluster 2026-03-10T12:44:22.712155+0000 mon.vm06 (mon.0) 384 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712279+0000 mon.vm06 (mon.0) 385 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712279+0000 mon.vm06 (mon.0) 385 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712468+0000 mon.vm06 (mon.0) 386 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712468+0000 mon.vm06 (mon.0) 386 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712648+0000 mon.vm06 (mon.0) 387 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712648+0000 mon.vm06 (mon.0) 387 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712786+0000 mon.vm06 (mon.0) 388 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712786+0000 mon.vm06 (mon.0) 388 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712914+0000 mon.vm06 (mon.0) 389 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:23.162 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:23 vm06 bash[17497]: audit 2026-03-10T12:44:22.712914+0000 mon.vm06 (mon.0) 389 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:23.237 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd stat -f json 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: cluster 2026-03-10T12:44:21.936775+0000 mgr.vm06.cofomf (mgr.14193) 72 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: cluster 2026-03-10T12:44:21.936775+0000 mgr.vm06.cofomf (mgr.14193) 72 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.176673+0000 mon.vm06 (mon.0) 381 : audit [DBG] from='client.? 192.168.123.106:0/98288318' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.176673+0000 mon.vm06 (mon.0) 381 : audit [DBG] from='client.? 192.168.123.106:0/98288318' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.705611+0000 mon.vm09 (mon.1) 6 : audit [INF] from='client.? 
192.168.123.109:0/692551613' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.705611+0000 mon.vm09 (mon.1) 6 : audit [INF] from='client.? 192.168.123.109:0/692551613' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.706955+0000 mon.vm06 (mon.0) 382 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.706955+0000 mon.vm06 (mon.0) 382 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]: dispatch 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.709671+0000 mon.vm06 (mon.0) 383 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]': finished 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.709671+0000 mon.vm06 (mon.0) 383 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac094c73-334f-420d-9435-350954d4fcfe"}]': finished 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: cluster 2026-03-10T12:44:22.712155+0000 mon.vm06 (mon.0) 384 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in 2026-03-10T12:44:23.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: cluster 2026-03-10T12:44:22.712155+0000 mon.vm06 (mon.0) 384 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712279+0000 mon.vm06 (mon.0) 385 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712279+0000 mon.vm06 (mon.0) 385 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712468+0000 mon.vm06 (mon.0) 386 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712468+0000 mon.vm06 (mon.0) 386 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712648+0000 mon.vm06 (mon.0) 387 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", 
"id": 2}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712648+0000 mon.vm06 (mon.0) 387 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712786+0000 mon.vm06 (mon.0) 388 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712786+0000 mon.vm06 (mon.0) 388 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712914+0000 mon.vm06 (mon.0) 389 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:23.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:23 vm09 bash[21409]: audit 2026-03-10T12:44:22.712914+0000 mon.vm06 (mon.0) 389 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.075891+0000 mon.vm06 (mon.0) 390 : audit [INF] from='client.? 
192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.075891+0000 mon.vm06 (mon.0) 390 : audit [INF] from='client.? 192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.079026+0000 mon.vm06 (mon.0) 391 : audit [INF] from='client.? 192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]': finished 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.079026+0000 mon.vm06 (mon.0) 391 : audit [INF] from='client.? 
192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]': finished 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: cluster 2026-03-10T12:44:23.081761+0000 mon.vm06 (mon.0) 392 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: cluster 2026-03-10T12:44:23.081761+0000 mon.vm06 (mon.0) 392 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.081984+0000 mon.vm06 (mon.0) 393 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.081984+0000 mon.vm06 (mon.0) 393 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.082716+0000 mon.vm06 (mon.0) 394 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.082716+0000 mon.vm06 (mon.0) 394 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.083098+0000 mon.vm06 (mon.0) 395 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.083098+0000 mon.vm06 (mon.0) 395 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.083763+0000 mon.vm06 (mon.0) 396 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.083763+0000 mon.vm06 (mon.0) 396 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.084403+0000 mon.vm06 (mon.0) 397 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.084403+0000 mon.vm06 (mon.0) 397 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.084576+0000 mon.vm06 (mon.0) 398 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.084576+0000 mon.vm06 (mon.0) 398 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.366132+0000 mon.vm09 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/3703017833' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.366132+0000 mon.vm09 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/3703017833' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.715109+0000 mon.vm06 (mon.0) 399 : audit [DBG] from='client.? 192.168.123.106:0/2501324739' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:24.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:24 vm06 bash[17497]: audit 2026-03-10T12:44:23.715109+0000 mon.vm06 (mon.0) 399 : audit [DBG] from='client.? 192.168.123.106:0/2501324739' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.075891+0000 mon.vm06 (mon.0) 390 : audit [INF] from='client.? 192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.075891+0000 mon.vm06 (mon.0) 390 : audit [INF] from='client.? 
192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.079026+0000 mon.vm06 (mon.0) 391 : audit [INF] from='client.? 192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]': finished 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.079026+0000 mon.vm06 (mon.0) 391 : audit [INF] from='client.? 192.168.123.106:0/1721815033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "11f6c435-3f65-46bf-a53f-4c9da72c0aa3"}]': finished 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: cluster 2026-03-10T12:44:23.081761+0000 mon.vm06 (mon.0) 392 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: cluster 2026-03-10T12:44:23.081761+0000 mon.vm06 (mon.0) 392 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.081984+0000 mon.vm06 (mon.0) 393 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.081984+0000 mon.vm06 (mon.0) 393 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.082716+0000 mon.vm06 (mon.0) 394 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.082716+0000 mon.vm06 (mon.0) 394 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.083098+0000 mon.vm06 (mon.0) 395 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.083098+0000 mon.vm06 (mon.0) 395 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.083763+0000 mon.vm06 (mon.0) 396 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.083763+0000 mon.vm06 (mon.0) 396 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.084403+0000 mon.vm06 (mon.0) 397 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 
bash[21409]: audit 2026-03-10T12:44:23.084403+0000 mon.vm06 (mon.0) 397 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.084576+0000 mon.vm06 (mon.0) 398 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.084576+0000 mon.vm06 (mon.0) 398 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.366132+0000 mon.vm09 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/3703017833' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.366132+0000 mon.vm09 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/3703017833' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.715109+0000 mon.vm06 (mon.0) 399 : audit [DBG] from='client.? 192.168.123.106:0/2501324739' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:24 vm09 bash[21409]: audit 2026-03-10T12:44:23.715109+0000 mon.vm06 (mon.0) 399 : audit [DBG] from='client.? 
192.168.123.106:0/2501324739' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:25.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:25 vm06 bash[17497]: cluster 2026-03-10T12:44:23.936944+0000 mgr.vm06.cofomf (mgr.14193) 73 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:25.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:25 vm06 bash[17497]: cluster 2026-03-10T12:44:23.936944+0000 mgr.vm06.cofomf (mgr.14193) 73 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:25.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:25 vm09 bash[21409]: cluster 2026-03-10T12:44:23.936944+0000 mgr.vm06.cofomf (mgr.14193) 73 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:25.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:25 vm09 bash[21409]: cluster 2026-03-10T12:44:23.936944+0000 mgr.vm06.cofomf (mgr.14193) 73 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: cluster 2026-03-10T12:44:25.937112+0000 mgr.vm06.cofomf (mgr.14193) 74 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: cluster 2026-03-10T12:44:25.937112+0000 mgr.vm06.cofomf (mgr.14193) 74 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.498616+0000 mon.vm09 (mon.1) 8 : audit [INF] from='client.? 
192.168.123.109:0/1970206096' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.498616+0000 mon.vm09 (mon.1) 8 : audit [INF] from='client.? 192.168.123.109:0/1970206096' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.499945+0000 mon.vm06 (mon.0) 400 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.499945+0000 mon.vm06 (mon.0) 400 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.502729+0000 mon.vm06 (mon.0) 401 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]': finished 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.502729+0000 mon.vm06 (mon.0) 401 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]': finished 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: cluster 2026-03-10T12:44:26.505128+0000 mon.vm06 (mon.0) 402 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: cluster 2026-03-10T12:44:26.505128+0000 mon.vm06 (mon.0) 402 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505251+0000 mon.vm06 (mon.0) 403 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505251+0000 mon.vm06 (mon.0) 403 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505460+0000 mon.vm06 (mon.0) 404 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505460+0000 mon.vm06 (mon.0) 404 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505694+0000 mon.vm06 (mon.0) 405 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", 
"id": 2}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505694+0000 mon.vm06 (mon.0) 405 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505816+0000 mon.vm06 (mon.0) 406 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505816+0000 mon.vm06 (mon.0) 406 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505930+0000 mon.vm06 (mon.0) 407 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.505930+0000 mon.vm06 (mon.0) 407 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.506038+0000 mon.vm06 (mon.0) 408 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.506038+0000 mon.vm06 (mon.0) 408 : audit [DBG] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.506150+0000 mon.vm06 (mon.0) 409 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:26.506150+0000 mon.vm06 (mon.0) 409 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:27.109458+0000 mon.vm09 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.109:0/3636525106' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:27.503 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:27 vm06 bash[17497]: audit 2026-03-10T12:44:27.109458+0000 mon.vm09 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.109:0/3636525106' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: cluster 2026-03-10T12:44:25.937112+0000 mgr.vm06.cofomf (mgr.14193) 74 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: cluster 2026-03-10T12:44:25.937112+0000 mgr.vm06.cofomf (mgr.14193) 74 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.498616+0000 mon.vm09 (mon.1) 8 : audit [INF] from='client.? 
192.168.123.109:0/1970206096' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.498616+0000 mon.vm09 (mon.1) 8 : audit [INF] from='client.? 192.168.123.109:0/1970206096' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.499945+0000 mon.vm06 (mon.0) 400 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.499945+0000 mon.vm06 (mon.0) 400 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.502729+0000 mon.vm06 (mon.0) 401 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]': finished 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.502729+0000 mon.vm06 (mon.0) 401 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9d349e15-2ef2-47c0-87db-887b3e5b91c1"}]': finished 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: cluster 2026-03-10T12:44:26.505128+0000 mon.vm06 (mon.0) 402 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: cluster 2026-03-10T12:44:26.505128+0000 mon.vm06 (mon.0) 402 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505251+0000 mon.vm06 (mon.0) 403 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505251+0000 mon.vm06 (mon.0) 403 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505460+0000 mon.vm06 (mon.0) 404 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505460+0000 mon.vm06 (mon.0) 404 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505694+0000 mon.vm06 (mon.0) 405 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", 
"id": 2}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505694+0000 mon.vm06 (mon.0) 405 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505816+0000 mon.vm06 (mon.0) 406 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505816+0000 mon.vm06 (mon.0) 406 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505930+0000 mon.vm06 (mon.0) 407 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.505930+0000 mon.vm06 (mon.0) 407 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.506038+0000 mon.vm06 (mon.0) 408 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.506038+0000 mon.vm06 (mon.0) 408 : audit [DBG] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.506150+0000 mon.vm06 (mon.0) 409 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:26.506150+0000 mon.vm06 (mon.0) 409 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:27.109458+0000 mon.vm09 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.109:0/3636525106' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:27 vm09 bash[21409]: audit 2026-03-10T12:44:27.109458+0000 mon.vm09 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.109:0/3636525106' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:27.897 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:28.178 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:28.230 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773146667,"num_remapped_pgs":0} 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.405527+0000 mon.vm06 (mon.0) 410 : audit [INF] from='client.? 
192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.405527+0000 mon.vm06 (mon.0) 410 : audit [INF] from='client.? 192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.408581+0000 mon.vm06 (mon.0) 411 : audit [INF] from='client.? 192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]': finished 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.408581+0000 mon.vm06 (mon.0) 411 : audit [INF] from='client.? 
192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]': finished 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: cluster 2026-03-10T12:44:27.410598+0000 mon.vm06 (mon.0) 412 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: cluster 2026-03-10T12:44:27.410598+0000 mon.vm06 (mon.0) 412 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410756+0000 mon.vm06 (mon.0) 413 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410756+0000 mon.vm06 (mon.0) 413 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410818+0000 mon.vm06 (mon.0) 414 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410818+0000 mon.vm06 (mon.0) 414 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410858+0000 mon.vm06 (mon.0) 415 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410858+0000 mon.vm06 (mon.0) 415 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410897+0000 mon.vm06 (mon.0) 416 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410897+0000 mon.vm06 (mon.0) 416 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410935+0000 mon.vm06 (mon.0) 417 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410935+0000 mon.vm06 (mon.0) 417 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410988+0000 mon.vm06 (mon.0) 418 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.410988+0000 mon.vm06 (mon.0) 418 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.411019+0000 mon.vm06 (mon.0) 419 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.411019+0000 mon.vm06 (mon.0) 419 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.411046+0000 mon.vm06 (mon.0) 420 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:27.411046+0000 mon.vm06 (mon.0) 420 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:28.055248+0000 mon.vm06 (mon.0) 421 : audit [DBG] from='client.? 192.168.123.106:0/2886924125' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:28.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:28 vm06 bash[17497]: audit 2026-03-10T12:44:28.055248+0000 mon.vm06 (mon.0) 421 : audit [DBG] from='client.? 
192.168.123.106:0/2886924125' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.405527+0000 mon.vm06 (mon.0) 410 : audit [INF] from='client.? 192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.405527+0000 mon.vm06 (mon.0) 410 : audit [INF] from='client.? 192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.408581+0000 mon.vm06 (mon.0) 411 : audit [INF] from='client.? 192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]': finished 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.408581+0000 mon.vm06 (mon.0) 411 : audit [INF] from='client.? 
192.168.123.106:0/2929568123' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "96013d1a-8fdb-4e98-8244-f62c64e15111"}]': finished 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: cluster 2026-03-10T12:44:27.410598+0000 mon.vm06 (mon.0) 412 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: cluster 2026-03-10T12:44:27.410598+0000 mon.vm06 (mon.0) 412 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410756+0000 mon.vm06 (mon.0) 413 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410756+0000 mon.vm06 (mon.0) 413 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410818+0000 mon.vm06 (mon.0) 414 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410818+0000 mon.vm06 (mon.0) 414 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410858+0000 mon.vm06 (mon.0) 415 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410858+0000 mon.vm06 (mon.0) 415 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410897+0000 mon.vm06 (mon.0) 416 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410897+0000 mon.vm06 (mon.0) 416 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410935+0000 mon.vm06 (mon.0) 417 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410935+0000 mon.vm06 (mon.0) 417 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410988+0000 mon.vm06 (mon.0) 418 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.410988+0000 mon.vm06 (mon.0) 418 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.411019+0000 mon.vm06 (mon.0) 419 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.411019+0000 mon.vm06 (mon.0) 419 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.411046+0000 mon.vm06 (mon.0) 420 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:27.411046+0000 mon.vm06 (mon.0) 420 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:28.055248+0000 mon.vm06 (mon.0) 421 : audit [DBG] from='client.? 192.168.123.106:0/2886924125' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:28.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:28 vm09 bash[21409]: audit 2026-03-10T12:44:28.055248+0000 mon.vm06 (mon.0) 421 : audit [DBG] from='client.? 
192.168.123.106:0/2886924125' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:44:29.230 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd stat -f json 2026-03-10T12:44:29.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:29 vm06 bash[17497]: cluster 2026-03-10T12:44:27.937496+0000 mgr.vm06.cofomf (mgr.14193) 75 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:29.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:29 vm06 bash[17497]: cluster 2026-03-10T12:44:27.937496+0000 mgr.vm06.cofomf (mgr.14193) 75 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:29.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:29 vm06 bash[17497]: audit 2026-03-10T12:44:28.178875+0000 mon.vm06 (mon.0) 422 : audit [DBG] from='client.? 192.168.123.106:0/1721214608' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:29.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:29 vm06 bash[17497]: audit 2026-03-10T12:44:28.178875+0000 mon.vm06 (mon.0) 422 : audit [DBG] from='client.? 
192.168.123.106:0/1721214608' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:29.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:29 vm09 bash[21409]: cluster 2026-03-10T12:44:27.937496+0000 mgr.vm06.cofomf (mgr.14193) 75 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:29.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:29 vm09 bash[21409]: cluster 2026-03-10T12:44:27.937496+0000 mgr.vm06.cofomf (mgr.14193) 75 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:29.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:29 vm09 bash[21409]: audit 2026-03-10T12:44:28.178875+0000 mon.vm06 (mon.0) 422 : audit [DBG] from='client.? 192.168.123.106:0/1721214608' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:29.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:29 vm09 bash[21409]: audit 2026-03-10T12:44:28.178875+0000 mon.vm06 (mon.0) 422 : audit [DBG] from='client.? 
192.168.123.106:0/1721214608' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:31.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:31 vm06 bash[17497]: cluster 2026-03-10T12:44:29.937658+0000 mgr.vm06.cofomf (mgr.14193) 76 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:31.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:31 vm06 bash[17497]: cluster 2026-03-10T12:44:29.937658+0000 mgr.vm06.cofomf (mgr.14193) 76 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:31.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:31 vm09 bash[21409]: cluster 2026-03-10T12:44:29.937658+0000 mgr.vm06.cofomf (mgr.14193) 76 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:31.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:31 vm09 bash[21409]: cluster 2026-03-10T12:44:29.937658+0000 mgr.vm06.cofomf (mgr.14193) 76 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:32.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:32 vm06 bash[17497]: audit 2026-03-10T12:44:31.989501+0000 mon.vm06 (mon.0) 423 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:32.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:32 vm06 bash[17497]: audit 2026-03-10T12:44:31.989501+0000 mon.vm06 (mon.0) 423 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:32.608 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:32 vm09 bash[21409]: audit 2026-03-10T12:44:31.989501+0000 mon.vm06 (mon.0) 423 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-10T12:44:32.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:32 vm09 bash[21409]: audit 2026-03-10T12:44:31.989501+0000 mon.vm06 (mon.0) 423 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:33.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:33 vm06 bash[17497]: cluster 2026-03-10T12:44:31.937811+0000 mgr.vm06.cofomf (mgr.14193) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:33.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:33 vm06 bash[17497]: cluster 2026-03-10T12:44:31.937811+0000 mgr.vm06.cofomf (mgr.14193) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:33.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:33 vm09 bash[21409]: cluster 2026-03-10T12:44:31.937811+0000 mgr.vm06.cofomf (mgr.14193) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:33.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:33 vm09 bash[21409]: cluster 2026-03-10T12:44:31.937811+0000 mgr.vm06.cofomf (mgr.14193) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:33.882 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:34.125 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:34.192 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773146667,"num_remapped_pgs":0} 2026-03-10T12:44:34.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:34 vm06 bash[17497]: audit 2026-03-10T12:44:34.126473+0000 mon.vm06 (mon.0) 424 : audit [DBG] from='client.? 
192.168.123.106:0/3652146787' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:34.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:34 vm06 bash[17497]: audit 2026-03-10T12:44:34.126473+0000 mon.vm06 (mon.0) 424 : audit [DBG] from='client.? 192.168.123.106:0/3652146787' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:34.608 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:34 vm09 bash[21409]: audit 2026-03-10T12:44:34.126473+0000 mon.vm06 (mon.0) 424 : audit [DBG] from='client.? 192.168.123.106:0/3652146787' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:34.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:34 vm09 bash[21409]: audit 2026-03-10T12:44:34.126473+0000 mon.vm06 (mon.0) 424 : audit [DBG] from='client.? 192.168.123.106:0/3652146787' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:35.193 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd stat -f json 2026-03-10T12:44:35.343 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:35 vm09 bash[21409]: cluster 2026-03-10T12:44:33.937975+0000 mgr.vm06.cofomf (mgr.14193) 78 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:35.343 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:35 vm09 bash[21409]: cluster 2026-03-10T12:44:33.937975+0000 mgr.vm06.cofomf (mgr.14193) 78 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:35.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:35 vm06 bash[17497]: cluster 2026-03-10T12:44:33.937975+0000 mgr.vm06.cofomf (mgr.14193) 78 : cluster [DBG] pgmap v32: 0 pgs: 
; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:35.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:35 vm06 bash[17497]: cluster 2026-03-10T12:44:33.937975+0000 mgr.vm06.cofomf (mgr.14193) 78 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:36.474 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 bash[21409]: audit 2026-03-10T12:44:35.683092+0000 mon.vm06 (mon.0) 425 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T12:44:36.474 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 bash[21409]: audit 2026-03-10T12:44:35.683092+0000 mon.vm06 (mon.0) 425 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T12:44:36.474 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 bash[21409]: audit 2026-03-10T12:44:35.683739+0000 mon.vm06 (mon.0) 426 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:36.474 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 bash[21409]: audit 2026-03-10T12:44:35.683739+0000 mon.vm06 (mon.0) 426 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:36.474 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 bash[21409]: cephadm 2026-03-10T12:44:35.684243+0000 mgr.vm06.cofomf (mgr.14193) 79 : cephadm [INF] Deploying daemon osd.0 on vm09 2026-03-10T12:44:36.474 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 bash[21409]: cephadm 2026-03-10T12:44:35.684243+0000 mgr.vm06.cofomf (mgr.14193) 79 : cephadm [INF] Deploying daemon osd.0 on vm09 2026-03-10T12:44:36.582 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 
12:44:36 vm06 bash[17497]: audit 2026-03-10T12:44:35.683092+0000 mon.vm06 (mon.0) 425 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T12:44:36.582 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:36 vm06 bash[17497]: audit 2026-03-10T12:44:35.683092+0000 mon.vm06 (mon.0) 425 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T12:44:36.582 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:36 vm06 bash[17497]: audit 2026-03-10T12:44:35.683739+0000 mon.vm06 (mon.0) 426 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:36.582 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:36 vm06 bash[17497]: audit 2026-03-10T12:44:35.683739+0000 mon.vm06 (mon.0) 426 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:36.582 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:36 vm06 bash[17497]: cephadm 2026-03-10T12:44:35.684243+0000 mgr.vm06.cofomf (mgr.14193) 79 : cephadm [INF] Deploying daemon osd.0 on vm09 2026-03-10T12:44:36.582 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:36 vm06 bash[17497]: cephadm 2026-03-10T12:44:35.684243+0000 mgr.vm06.cofomf (mgr.14193) 79 : cephadm [INF] Deploying daemon osd.0 on vm09 2026-03-10T12:44:36.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:36.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:36 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:37.181 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: cluster 2026-03-10T12:44:35.938136+0000 mgr.vm06.cofomf (mgr.14193) 80 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: cluster 2026-03-10T12:44:35.938136+0000 mgr.vm06.cofomf (mgr.14193) 80 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.318855+0000 mon.vm06 (mon.0) 427 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.318855+0000 mon.vm06 (mon.0) 427 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.319601+0000 mon.vm06 (mon.0) 428 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.319601+0000 mon.vm06 (mon.0) 428 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: cephadm 2026-03-10T12:44:36.320191+0000 mgr.vm06.cofomf (mgr.14193) 81 : cephadm [INF] Deploying daemon osd.1 on vm06
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: cephadm 2026-03-10T12:44:36.320191+0000 mgr.vm06.cofomf (mgr.14193) 81 : cephadm [INF] Deploying daemon osd.1 on vm06
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.797368+0000 mon.vm06 (mon.0) 429 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.797368+0000 mon.vm06 (mon.0) 429 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.802286+0000 mon.vm06 (mon.0) 430 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.802286+0000 mon.vm06 (mon.0) 430 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.802910+0000 mon.vm06 (mon.0) 431 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.802910+0000 mon.vm06 (mon.0) 431 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.803535+0000 mon.vm06 (mon.0) 432 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 bash[21409]: audit 2026-03-10T12:44:36.803535+0000 mon.vm06 (mon.0) 432 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.487 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: cluster 2026-03-10T12:44:35.938136+0000 mgr.vm06.cofomf (mgr.14193) 80 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: cluster 2026-03-10T12:44:35.938136+0000 mgr.vm06.cofomf (mgr.14193) 80 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.318855+0000 mon.vm06 (mon.0) 427 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.318855+0000 mon.vm06 (mon.0) 427 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.319601+0000 mon.vm06 (mon.0) 428 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.319601+0000 mon.vm06 (mon.0) 428 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: cephadm 2026-03-10T12:44:36.320191+0000 mgr.vm06.cofomf (mgr.14193) 81 : cephadm [INF] Deploying daemon osd.1 on vm06
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: cephadm 2026-03-10T12:44:36.320191+0000 mgr.vm06.cofomf (mgr.14193) 81 : cephadm [INF] Deploying daemon osd.1 on vm06
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.797368+0000 mon.vm06 (mon.0) 429 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.797368+0000 mon.vm06 (mon.0) 429 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.802286+0000 mon.vm06 (mon.0) 430 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.802286+0000 mon.vm06 (mon.0) 430 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.802910+0000 mon.vm06 (mon.0) 431 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.802910+0000 mon.vm06 (mon.0) 431 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.803535+0000 mon.vm06 (mon.0) 432 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 bash[17497]: audit 2026-03-10T12:44:36.803535+0000 mon.vm06 (mon.0) 432 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:37.488 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:37 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:37.986 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:37 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:38.251 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: cephadm 2026-03-10T12:44:36.804036+0000 mgr.vm06.cofomf (mgr.14193) 82 : cephadm [INF] Deploying daemon osd.3 on vm09
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: cephadm 2026-03-10T12:44:36.804036+0000 mgr.vm06.cofomf (mgr.14193) 82 : cephadm [INF] Deploying daemon osd.3 on vm09
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.443850+0000 mon.vm06 (mon.0) 433 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.443850+0000 mon.vm06 (mon.0) 433 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.448630+0000 mon.vm06 (mon.0) 434 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.448630+0000 mon.vm06 (mon.0) 434 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.449311+0000 mon.vm06 (mon.0) 435 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.449311+0000 mon.vm06 (mon.0) 435 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.450349+0000 mon.vm06 (mon.0) 436 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:37.450349+0000 mon.vm06 (mon.0) 436 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: cephadm 2026-03-10T12:44:37.451099+0000 mgr.vm06.cofomf (mgr.14193) 83 : cephadm [INF] Deploying daemon osd.2 on vm06
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: cephadm 2026-03-10T12:44:37.451099+0000 mgr.vm06.cofomf (mgr.14193) 83 : cephadm [INF] Deploying daemon osd.2 on vm06
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.198084+0000 mon.vm06 (mon.0) 437 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.198084+0000 mon.vm06 (mon.0) 437 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.206201+0000 mon.vm06 (mon.0) 438 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.206201+0000 mon.vm06 (mon.0) 438 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.206961+0000 mon.vm06 (mon.0) 439 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.206961+0000 mon.vm06 (mon.0) 439 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.207497+0000 mon.vm06 (mon.0) 440 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.524 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:38 vm09 bash[21409]: audit 2026-03-10T12:44:38.207497+0000 mon.vm06 (mon.0) 440 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: cephadm 2026-03-10T12:44:36.804036+0000 mgr.vm06.cofomf (mgr.14193) 82 : cephadm [INF] Deploying daemon osd.3 on vm09
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: cephadm 2026-03-10T12:44:36.804036+0000 mgr.vm06.cofomf (mgr.14193) 82 : cephadm [INF] Deploying daemon osd.3 on vm09
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.443850+0000 mon.vm06 (mon.0) 433 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.443850+0000 mon.vm06 (mon.0) 433 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.448630+0000 mon.vm06 (mon.0) 434 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.448630+0000 mon.vm06 (mon.0) 434 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.449311+0000 mon.vm06 (mon.0) 435 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.449311+0000 mon.vm06 (mon.0) 435 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.450349+0000 mon.vm06 (mon.0) 436 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:37.450349+0000 mon.vm06 (mon.0) 436 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: cephadm 2026-03-10T12:44:37.451099+0000 mgr.vm06.cofomf (mgr.14193) 83 : cephadm [INF] Deploying daemon osd.2 on vm06
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: cephadm 2026-03-10T12:44:37.451099+0000 mgr.vm06.cofomf (mgr.14193) 83 : cephadm [INF] Deploying daemon osd.2 on vm06
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.198084+0000 mon.vm06 (mon.0) 437 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.198084+0000 mon.vm06 (mon.0) 437 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.206201+0000 mon.vm06 (mon.0) 438 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.206201+0000 mon.vm06 (mon.0) 438 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.206961+0000 mon.vm06 (mon.0) 439 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.206961+0000 mon.vm06 (mon.0) 439 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.207497+0000 mon.vm06 (mon.0) 440 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 bash[17497]: audit 2026-03-10T12:44:38.207497+0000 mon.vm06 (mon.0) 440 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:38.606 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:38.895 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:38 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:39.414 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: cluster 2026-03-10T12:44:37.939033+0000 mgr.vm06.cofomf (mgr.14193) 84 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: cluster 2026-03-10T12:44:37.939033+0000 mgr.vm06.cofomf (mgr.14193) 84 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: cephadm 2026-03-10T12:44:38.207972+0000 mgr.vm06.cofomf (mgr.14193) 85 : cephadm [INF] Deploying daemon osd.4 on vm09
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: cephadm 2026-03-10T12:44:38.207972+0000 mgr.vm06.cofomf (mgr.14193) 85 : cephadm [INF] Deploying daemon osd.4 on vm09
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.828266+0000 mon.vm06 (mon.0) 441 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.828266+0000 mon.vm06 (mon.0) 441 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.839603+0000 mon.vm06 (mon.0) 442 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.839603+0000 mon.vm06 (mon.0) 442 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.840395+0000 mon.vm06 (mon.0) 443 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.840395+0000 mon.vm06 (mon.0) 443 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.840934+0000 mon.vm06 (mon.0) 444 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 bash[21409]: audit 2026-03-10T12:44:38.840934+0000 mon.vm06 (mon.0) 444 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:39.696 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:39 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:39.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: cluster 2026-03-10T12:44:37.939033+0000 mgr.vm06.cofomf (mgr.14193) 84 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:39.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: cluster 2026-03-10T12:44:37.939033+0000 mgr.vm06.cofomf (mgr.14193) 84 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:39.785 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: cephadm 2026-03-10T12:44:38.207972+0000 mgr.vm06.cofomf (mgr.14193) 85 : cephadm [INF] Deploying daemon osd.4 on vm09
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: cephadm 2026-03-10T12:44:38.207972+0000 mgr.vm06.cofomf (mgr.14193) 85 : cephadm [INF] Deploying daemon osd.4 on vm09
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.828266+0000 mon.vm06 (mon.0) 441 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.828266+0000 mon.vm06 (mon.0) 441 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.839603+0000 mon.vm06 (mon.0) 442 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.839603+0000 mon.vm06 (mon.0) 442 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.840395+0000 mon.vm06 (mon.0) 443 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.840395+0000 mon.vm06 (mon.0) 443 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.840934+0000 mon.vm06 (mon.0) 444 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:39.786 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 bash[17497]: audit 2026-03-10T12:44:38.840934+0000 mon.vm06 (mon.0) 444 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:40.029 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:39 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:40.303 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: cephadm 2026-03-10T12:44:38.841361+0000 mgr.vm06.cofomf (mgr.14193) 86 : cephadm [INF] Deploying daemon osd.5 on vm06
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: cephadm 2026-03-10T12:44:38.841361+0000 mgr.vm06.cofomf (mgr.14193) 86 : cephadm [INF] Deploying daemon osd.5 on vm06
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.666580+0000 mon.vm06 (mon.0) 445 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.666580+0000 mon.vm06 (mon.0) 445 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.671404+0000 mon.vm06 (mon.0) 446 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.671404+0000 mon.vm06 (mon.0) 446 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.672115+0000 mon.vm06 (mon.0) 447 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.672115+0000 mon.vm06 (mon.0) 447 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.672728+0000 mon.vm06 (mon.0) 448 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:39.672728+0000 mon.vm06 (mon.0) 448 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: cephadm 2026-03-10T12:44:39.673253+0000 mgr.vm06.cofomf (mgr.14193) 87 : cephadm [INF] Deploying daemon osd.6 on vm09
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: cephadm 2026-03-10T12:44:39.673253+0000 mgr.vm06.cofomf (mgr.14193) 87 : cephadm [INF] Deploying daemon osd.6 on vm09
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.307386+0000 mon.vm06 (mon.0) 449 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.307386+0000 mon.vm06 (mon.0) 449 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.314180+0000 mon.vm06 (mon.0) 450 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.314180+0000 mon.vm06 (mon.0) 450 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.317320+0000 mon.vm06 (mon.0) 451 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.317320+0000 mon.vm06 (mon.0) 451 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.318594+0000 mon.vm06 (mon.0) 452 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.318594+0000 mon.vm06 (mon.0) 452 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.502075+0000 mon.vm09 (mon.1) 10 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.502075+0000 mon.vm09 (mon.1) 10 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.503217+0000 mon.vm06 (mon.0) 453 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:44:40.804 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 bash[21409]: audit 2026-03-10T12:44:40.503217+0000 mon.vm06 (mon.0) 453 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: cephadm 2026-03-10T12:44:38.841361+0000 mgr.vm06.cofomf (mgr.14193) 86 : cephadm [INF] Deploying daemon osd.5 on vm06
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: cephadm 2026-03-10T12:44:38.841361+0000 mgr.vm06.cofomf (mgr.14193) 86 : cephadm [INF] Deploying daemon osd.5 on vm06
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.666580+0000 mon.vm06 (mon.0) 445 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.666580+0000 mon.vm06 (mon.0) 445 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.671404+0000 mon.vm06 (mon.0) 446 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.671404+0000 mon.vm06 (mon.0) 446 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.672115+0000 mon.vm06 (mon.0) 447 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.672115+0000 mon.vm06 (mon.0) 447 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.672728+0000 mon.vm06 (mon.0) 448 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:39.672728+0000 mon.vm06 (mon.0) 448 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: cephadm 2026-03-10T12:44:39.673253+0000 mgr.vm06.cofomf (mgr.14193) 87 : cephadm [INF] Deploying daemon osd.6 on vm09
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: cephadm 2026-03-10T12:44:39.673253+0000 mgr.vm06.cofomf (mgr.14193) 87 : cephadm [INF] Deploying daemon osd.6 on vm09
2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.307386+0000 mon.vm06 (mon.0) 449 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf'
2026-03-10T12:44:40.831
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.307386+0000 mon.vm06 (mon.0) 449 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.314180+0000 mon.vm06 (mon.0) 450 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.314180+0000 mon.vm06 (mon.0) 450 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.317320+0000 mon.vm06 (mon.0) 451 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.317320+0000 mon.vm06 (mon.0) 451 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.318594+0000 mon.vm06 (mon.0) 452 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.318594+0000 mon.vm06 (mon.0) 452 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 
vm06 bash[17497]: audit 2026-03-10T12:44:40.502075+0000 mon.vm09 (mon.1) 10 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.502075+0000 mon.vm09 (mon.1) 10 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.503217+0000 mon.vm06 (mon.0) 453 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T12:44:40.831 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:40 vm06 bash[17497]: audit 2026-03-10T12:44:40.503217+0000 mon.vm06 (mon.0) 453 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T12:44:41.062 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:40 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:44:41.338 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: cluster 2026-03-10T12:44:39.939217+0000 mgr.vm06.cofomf (mgr.14193) 88 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: cluster 2026-03-10T12:44:39.939217+0000 mgr.vm06.cofomf (mgr.14193) 88 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: cephadm 2026-03-10T12:44:40.319395+0000 mgr.vm06.cofomf (mgr.14193) 89 : cephadm [INF] Deploying daemon osd.7 on vm06 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: cephadm 2026-03-10T12:44:40.319395+0000 mgr.vm06.cofomf (mgr.14193) 89 : cephadm [INF] Deploying daemon osd.7 on vm06 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.259641+0000 mon.vm06 (mon.0) 454 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.259641+0000 mon.vm06 (mon.0) 454 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.298083+0000 mon.vm06 (mon.0) 455 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.298083+0000 mon.vm06 (mon.0) 455 : audit [INF] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.366875+0000 mon.vm06 (mon.0) 456 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.366875+0000 mon.vm06 (mon.0) 456 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: cluster 2026-03-10T12:44:41.369457+0000 mon.vm06 (mon.0) 457 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: cluster 2026-03-10T12:44:41.369457+0000 mon.vm06 (mon.0) 457 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.369660+0000 mon.vm06 (mon.0) 458 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.369660+0000 mon.vm06 (mon.0) 458 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.369805+0000 mon.vm06 (mon.0) 459 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:41.725 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.369805+0000 mon.vm06 (mon.0) 459 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.369948+0000 mon.vm06 (mon.0) 460 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.369948+0000 mon.vm06 (mon.0) 460 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.370266+0000 mon.vm06 (mon.0) 461 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.370266+0000 mon.vm06 (mon.0) 461 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.370323+0000 mon.vm06 (mon.0) 462 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.370323+0000 mon.vm06 (mon.0) 462 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.370367+0000 mon.vm06 (mon.0) 463 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.370367+0000 mon.vm06 (mon.0) 463 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.373072+0000 mon.vm06 (mon.0) 464 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.373072+0000 mon.vm06 (mon.0) 464 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.373168+0000 mon.vm06 (mon.0) 465 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.373168+0000 mon.vm06 (mon.0) 465 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.384287+0000 mon.vm09 (mon.1) 11 : audit [INF] 
from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.384287+0000 mon.vm09 (mon.1) 11 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.385430+0000 mon.vm06 (mon.0) 466 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.385430+0000 mon.vm06 (mon.0) 466 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.697205+0000 mon.vm06 (mon.0) 467 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T12:44:41.725 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 bash[17497]: audit 2026-03-10T12:44:41.697205+0000 mon.vm06 (mon.0) 467 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T12:44:41.975 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:44:41.975 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:41 vm06 systemd[1]: /etc/systemd/system/ceph-68e2be40-1c7e-11f1-b779-df2955349a39@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: cluster 2026-03-10T12:44:39.939217+0000 mgr.vm06.cofomf (mgr.14193) 88 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: cluster 2026-03-10T12:44:39.939217+0000 mgr.vm06.cofomf (mgr.14193) 88 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: cephadm 2026-03-10T12:44:40.319395+0000 mgr.vm06.cofomf (mgr.14193) 89 : cephadm [INF] Deploying daemon osd.7 on vm06 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: cephadm 2026-03-10T12:44:40.319395+0000 mgr.vm06.cofomf (mgr.14193) 89 : cephadm [INF] Deploying daemon osd.7 on vm06 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.259641+0000 mon.vm06 (mon.0) 454 : audit [INF] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.259641+0000 mon.vm06 (mon.0) 454 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.298083+0000 mon.vm06 (mon.0) 455 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.298083+0000 mon.vm06 (mon.0) 455 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.366875+0000 mon.vm06 (mon.0) 456 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.366875+0000 mon.vm06 (mon.0) 456 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: cluster 2026-03-10T12:44:41.369457+0000 mon.vm06 (mon.0) 457 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: cluster 2026-03-10T12:44:41.369457+0000 mon.vm06 (mon.0) 457 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.369660+0000 mon.vm06 (mon.0) 458 : audit [DBG] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.369660+0000 mon.vm06 (mon.0) 458 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.369805+0000 mon.vm06 (mon.0) 459 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.369805+0000 mon.vm06 (mon.0) 459 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.369948+0000 mon.vm06 (mon.0) 460 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.369948+0000 mon.vm06 (mon.0) 460 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.370266+0000 mon.vm06 (mon.0) 461 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 
2026-03-10T12:44:41.370266+0000 mon.vm06 (mon.0) 461 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.370323+0000 mon.vm06 (mon.0) 462 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.370323+0000 mon.vm06 (mon.0) 462 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.370367+0000 mon.vm06 (mon.0) 463 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.370367+0000 mon.vm06 (mon.0) 463 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.373072+0000 mon.vm06 (mon.0) 464 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.373072+0000 mon.vm06 (mon.0) 464 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:42.109 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.373168+0000 mon.vm06 (mon.0) 465 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.373168+0000 mon.vm06 (mon.0) 465 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.384287+0000 mon.vm09 (mon.1) 11 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.384287+0000 mon.vm09 (mon.1) 11 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.385430+0000 mon.vm06 (mon.0) 466 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.385430+0000 mon.vm06 (mon.0) 466 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 
2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.697205+0000 mon.vm06 (mon.0) 467 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T12:44:42.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:41 vm09 bash[21409]: audit 2026-03-10T12:44:41.697205+0000 mon.vm06 (mon.0) 467 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T12:44:43.124 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: cluster 2026-03-10T12:44:41.939423+0000 mgr.vm06.cofomf (mgr.14193) 90 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: cluster 2026-03-10T12:44:41.939423+0000 mgr.vm06.cofomf (mgr.14193) 90 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.087667+0000 mon.vm06 (mon.0) 468 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.087667+0000 mon.vm06 (mon.0) 468 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.099153+0000 mon.vm06 (mon.0) 469 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.125 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.099153+0000 mon.vm06 (mon.0) 469 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.369522+0000 mon.vm06 (mon.0) 470 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.369522+0000 mon.vm06 (mon.0) 470 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.369671+0000 mon.vm06 (mon.0) 471 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.369671+0000 mon.vm06 (mon.0) 471 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: cluster 2026-03-10T12:44:42.372424+0000 mon.vm06 (mon.0) 472 : cluster [DBG] osdmap e15: 8 total, 0 up, 8 in 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: cluster 2026-03-10T12:44:42.372424+0000 mon.vm06 (mon.0) 472 : cluster [DBG] osdmap e15: 8 total, 0 
up, 8 in 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372597+0000 mon.vm06 (mon.0) 473 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372597+0000 mon.vm06 (mon.0) 473 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372658+0000 mon.vm06 (mon.0) 474 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372658+0000 mon.vm06 (mon.0) 474 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372730+0000 mon.vm06 (mon.0) 475 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372730+0000 mon.vm06 (mon.0) 475 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:43.125 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372764+0000 mon.vm06 (mon.0) 476 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372764+0000 mon.vm06 (mon.0) 476 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372796+0000 mon.vm06 (mon.0) 477 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372796+0000 mon.vm06 (mon.0) 477 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372919+0000 mon.vm06 (mon.0) 478 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.372919+0000 mon.vm06 (mon.0) 478 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.373196+0000 mon.vm06 (mon.0) 479 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.373196+0000 mon.vm06 (mon.0) 479 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.373252+0000 mon.vm06 (mon.0) 480 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.373252+0000 mon.vm06 (mon.0) 480 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.373282+0000 mon.vm06 (mon.0) 481 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.373282+0000 mon.vm06 (mon.0) 481 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.376014+0000 mon.vm06 (mon.0) 482 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.376014+0000 mon.vm06 (mon.0) 482 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.519811+0000 mon.vm09 (mon.1) 12 : audit [INF] from='osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.519811+0000 mon.vm09 (mon.1) 12 : audit [INF] from='osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.520796+0000 mon.vm06 (mon.0) 483 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.520796+0000 mon.vm06 (mon.0) 483 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.871265+0000 mon.vm09 (mon.1) 13 : audit [INF] from='osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.871265+0000 mon.vm09 (mon.1) 13 : audit [INF] from='osd.2 
[v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.872209+0000 mon.vm06 (mon.0) 484 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.125 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:43 vm06 bash[17497]: audit 2026-03-10T12:44:42.872209+0000 mon.vm06 (mon.0) 484 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: cluster 2026-03-10T12:44:41.939423+0000 mgr.vm06.cofomf (mgr.14193) 90 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: cluster 2026-03-10T12:44:41.939423+0000 mgr.vm06.cofomf (mgr.14193) 90 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.087667+0000 mon.vm06 (mon.0) 468 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.087667+0000 mon.vm06 (mon.0) 468 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.099153+0000 mon.vm06 (mon.0) 469 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.401 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.099153+0000 mon.vm06 (mon.0) 469 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.369522+0000 mon.vm06 (mon.0) 470 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.369522+0000 mon.vm06 (mon.0) 470 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.369671+0000 mon.vm06 (mon.0) 471 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.369671+0000 mon.vm06 (mon.0) 471 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: cluster 2026-03-10T12:44:42.372424+0000 mon.vm06 (mon.0) 472 : cluster [DBG] osdmap e15: 8 total, 0 up, 8 in 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: cluster 2026-03-10T12:44:42.372424+0000 mon.vm06 (mon.0) 472 : cluster [DBG] osdmap e15: 8 total, 0 
up, 8 in 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372597+0000 mon.vm06 (mon.0) 473 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372597+0000 mon.vm06 (mon.0) 473 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372658+0000 mon.vm06 (mon.0) 474 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372658+0000 mon.vm06 (mon.0) 474 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372730+0000 mon.vm06 (mon.0) 475 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372730+0000 mon.vm06 (mon.0) 475 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:43.401 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372764+0000 mon.vm06 (mon.0) 476 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372764+0000 mon.vm06 (mon.0) 476 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372796+0000 mon.vm06 (mon.0) 477 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372796+0000 mon.vm06 (mon.0) 477 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372919+0000 mon.vm06 (mon.0) 478 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.372919+0000 mon.vm06 (mon.0) 478 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.373196+0000 mon.vm06 (mon.0) 479 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.373196+0000 mon.vm06 (mon.0) 479 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.373252+0000 mon.vm06 (mon.0) 480 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.373252+0000 mon.vm06 (mon.0) 480 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.373282+0000 mon.vm06 (mon.0) 481 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.373282+0000 mon.vm06 (mon.0) 481 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.376014+0000 mon.vm06 (mon.0) 482 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.376014+0000 mon.vm06 (mon.0) 482 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.519811+0000 mon.vm09 (mon.1) 12 : audit [INF] from='osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.519811+0000 mon.vm09 (mon.1) 12 : audit [INF] from='osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.520796+0000 mon.vm06 (mon.0) 483 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.520796+0000 mon.vm06 (mon.0) 483 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.871265+0000 mon.vm09 (mon.1) 13 : audit [INF] from='osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.871265+0000 mon.vm09 (mon.1) 13 : audit [INF] from='osd.2 
[v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.401 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.872209+0000 mon.vm06 (mon.0) 484 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.402 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:43 vm09 bash[21409]: audit 2026-03-10T12:44:42.872209+0000 mon.vm06 (mon.0) 484 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:44:43.965 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:44.274 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:44.359 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":16,"num_osds":8,"num_up_osds":1,"osd_up_since":1773146683,"num_in_osds":8,"osd_in_since":1773146667,"num_remapped_pgs":0} 2026-03-10T12:44:44.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:41.447006+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:44.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:41.447006+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:44.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:41.447079+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:44.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:41.447079+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:44.598 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.372581+0000 mon.vm06 (mon.0) 485 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.372581+0000 mon.vm06 (mon.0) 485 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.372735+0000 mon.vm06 (mon.0) 486 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.372735+0000 mon.vm06 (mon.0) 486 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.372839+0000 mon.vm06 (mon.0) 487 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.372839+0000 mon.vm06 (mon.0) 487 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 
12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:43.377452+0000 mon.vm06 (mon.0) 488 : cluster [INF] osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896] boot 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:43.377452+0000 mon.vm06 (mon.0) 488 : cluster [INF] osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896] boot 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:43.377572+0000 mon.vm06 (mon.0) 489 : cluster [DBG] osdmap e16: 8 total, 1 up, 8 in 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: cluster 2026-03-10T12:44:43.377572+0000 mon.vm06 (mon.0) 489 : cluster [DBG] osdmap e16: 8 total, 1 up, 8 in 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378038+0000 mon.vm09 (mon.1) 14 : audit [INF] from='osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378038+0000 mon.vm09 (mon.1) 14 : audit [INF] from='osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378226+0000 mon.vm06 (mon.0) 490 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:44.598 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378226+0000 mon.vm06 (mon.0) 490 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378344+0000 mon.vm06 (mon.0) 491 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378344+0000 mon.vm06 (mon.0) 491 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378397+0000 mon.vm06 (mon.0) 492 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378397+0000 mon.vm06 (mon.0) 492 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378448+0000 mon.vm06 (mon.0) 493 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378448+0000 mon.vm06 (mon.0) 493 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378495+0000 mon.vm06 (mon.0) 494 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378495+0000 mon.vm06 (mon.0) 494 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378549+0000 mon.vm06 (mon.0) 495 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378549+0000 mon.vm06 (mon.0) 495 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378592+0000 mon.vm06 (mon.0) 496 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378592+0000 mon.vm06 (mon.0) 496 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378631+0000 mon.vm06 (mon.0) 497 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378631+0000 mon.vm06 (mon.0) 497 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378745+0000 mon.vm09 (mon.1) 15 : audit [INF] from='osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.378745+0000 mon.vm09 (mon.1) 15 : audit [INF] from='osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.379429+0000 mon.vm06 (mon.0) 498 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.379429+0000 mon.vm06 (mon.0) 498 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.379726+0000 mon.vm06 
(mon.0) 499 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.382491+0000 mon.vm06 (mon.0) 500 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.735342+0000 mon.vm09 (mon.1) 16 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:43.736372+0000 mon.vm06 (mon.0) 501 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:44.274810+0000 mon.vm06 (mon.0) 502 : audit [DBG] from='client.? 192.168.123.106:0/595889584' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:44:44.598 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:44 vm06 bash[17497]: audit 2026-03-10T12:44:44.328979+0000 mon.vm06 (mon.0) 503 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: cluster 2026-03-10T12:44:41.447006+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: cluster 2026-03-10T12:44:41.447079+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.372581+0000 mon.vm06 (mon.0) 485 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.372735+0000 mon.vm06 (mon.0) 486 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.372839+0000 mon.vm06 (mon.0) 487 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: cluster 2026-03-10T12:44:43.377452+0000 mon.vm06 (mon.0) 488 : cluster [INF] osd.0 [v2:192.168.123.109:6800/920523896,v1:192.168.123.109:6801/920523896] boot
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: cluster 2026-03-10T12:44:43.377572+0000 mon.vm06 (mon.0) 489 : cluster [DBG] osdmap e16: 8 total, 1 up, 8 in
2026-03-10T12:44:44.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378038+0000 mon.vm09 (mon.1) 14 : audit [INF] from='osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378226+0000 mon.vm06 (mon.0) 490 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378344+0000 mon.vm06 (mon.0) 491 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378397+0000 mon.vm06 (mon.0) 492 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378448+0000 mon.vm06 (mon.0) 493 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378495+0000 mon.vm06 (mon.0) 494 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378549+0000 mon.vm06 (mon.0) 495 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378592+0000 mon.vm06 (mon.0) 496 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378631+0000 mon.vm06 (mon.0) 497 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.378745+0000 mon.vm09 (mon.1) 15 : audit [INF] from='osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.379429+0000 mon.vm06 (mon.0) 498 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.379726+0000 mon.vm06 (mon.0) 499 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.382491+0000 mon.vm06 (mon.0) 500 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.735342+0000 mon.vm09 (mon.1) 16 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:43.736372+0000 mon.vm06 (mon.0) 501 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:44.274810+0000 mon.vm06 (mon.0) 502 : audit [DBG] from='client.? 192.168.123.106:0/595889584' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:44:44.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:44 vm09 bash[21409]: audit 2026-03-10T12:44:44.328979+0000 mon.vm06 (mon.0) 503 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T12:44:45.360 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd stat -f json
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: cluster 2026-03-10T12:44:42.705046+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: cluster 2026-03-10T12:44:42.705100+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: cluster 2026-03-10T12:44:43.939587+0000 mgr.vm06.cofomf (mgr.14193) 91 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.382154+0000 mon.vm06 (mon.0) 504 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.382558+0000 mon.vm06 (mon.0) 505 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.382854+0000 mon.vm06 (mon.0) 506 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.382987+0000 mon.vm06 (mon.0) 507 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.383121+0000 mon.vm06 (mon.0) 508 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: cluster 2026-03-10T12:44:44.385989+0000 mon.vm06 (mon.0) 509 : cluster [DBG] osdmap e17: 8 total, 1 up, 8 in
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.387534+0000 mon.vm09 (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.389575+0000 mon.vm09 (mon.1) 18 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T12:44:45.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.394207+0000 mon.vm06 (mon.0) 510 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.394735+0000 mon.vm06 (mon.0) 511 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.395107+0000 mon.vm06 (mon.0) 512 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.395432+0000 mon.vm06 (mon.0) 513 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.396091+0000 mon.vm06 (mon.0) 514 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.396418+0000 mon.vm06 (mon.0) 515 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.396780+0000 mon.vm06 (mon.0) 516 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.397143+0000 mon.vm06 (mon.0) 517 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.397485+0000 mon.vm06 (mon.0) 518 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.397971+0000 mon.vm06 (mon.0) 519 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:44.656228+0000 mon.vm06 (mon.0) 520 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1'
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.381852+0000 mon.vm06 (mon.0) 521 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.387468+0000 mon.vm06 (mon.0) 522 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.387545+0000 mon.vm06 (mon.0) 523 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.387675+0000 mon.vm06 (mon.0) 524 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: cluster 2026-03-10T12:44:45.394910+0000 mon.vm06 (mon.0) 525 : cluster [INF] osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013] boot
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: cluster 2026-03-10T12:44:45.395008+0000 mon.vm06 (mon.0) 526 : cluster [INF] osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414] boot
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: cluster 2026-03-10T12:44:45.395029+0000 mon.vm06 (mon.0) 527 : cluster [DBG] osdmap e18: 8 total, 3 up, 8 in
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.397201+0000 mon.vm09 (mon.1) 19 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.398806+0000 mon.vm06 (mon.0) 528 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.399497+0000 mon.vm06 (mon.0) 529 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.400217+0000 mon.vm06 (mon.0) 530 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.400760+0000 mon.vm06 (mon.0) 531 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:44:45.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:45 vm06 bash[17497]: audit 2026-03-10T12:44:45.401408+0000 mon.vm06 (mon.0) 532 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:44:45.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:42.705046+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T12:44:45.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:42.705100+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T12:44:45.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:43.939587+0000 mgr.vm06.cofomf (mgr.14193) 91 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:44:45.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.382154+0000 mon.vm06 (mon.0) 504 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:44:45.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit
2026-03-10T12:44:44.382154+0000 mon.vm06 (mon.0) 504 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:45.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.382558+0000 mon.vm06 (mon.0) 505 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:45.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.382558+0000 mon.vm06 (mon.0) 505 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.382854+0000 mon.vm06 (mon.0) 506 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.382854+0000 mon.vm06 (mon.0) 506 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.382987+0000 mon.vm06 (mon.0) 507 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.382987+0000 mon.vm06 (mon.0) 507 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": 
"osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.383121+0000 mon.vm06 (mon.0) 508 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.383121+0000 mon.vm06 (mon.0) 508 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:44.385989+0000 mon.vm06 (mon.0) 509 : cluster [DBG] osdmap e17: 8 total, 1 up, 8 in 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:44.385989+0000 mon.vm06 (mon.0) 509 : cluster [DBG] osdmap e17: 8 total, 1 up, 8 in 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.387534+0000 mon.vm09 (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.387534+0000 mon.vm09 (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 
2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.389575+0000 mon.vm09 (mon.1) 18 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.389575+0000 mon.vm09 (mon.1) 18 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.394207+0000 mon.vm06 (mon.0) 510 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.394207+0000 mon.vm06 (mon.0) 510 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.394735+0000 mon.vm06 (mon.0) 511 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.394735+0000 mon.vm06 (mon.0) 511 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.395107+0000 mon.vm06 (mon.0) 512 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.395107+0000 mon.vm06 (mon.0) 512 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.395432+0000 mon.vm06 (mon.0) 513 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.395432+0000 mon.vm06 (mon.0) 513 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.396091+0000 mon.vm06 (mon.0) 514 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.396091+0000 mon.vm06 (mon.0) 514 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:44:45.860 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.396418+0000 mon.vm06 (mon.0) 515 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.396418+0000 mon.vm06 (mon.0) 515 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.396780+0000 mon.vm06 (mon.0) 516 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.396780+0000 mon.vm06 (mon.0) 516 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.397143+0000 mon.vm06 (mon.0) 517 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.397143+0000 mon.vm06 (mon.0) 517 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.397485+0000 mon.vm06 (mon.0) 518 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.397485+0000 mon.vm06 (mon.0) 518 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.397971+0000 mon.vm06 (mon.0) 519 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.397971+0000 mon.vm06 (mon.0) 519 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.656228+0000 mon.vm06 (mon.0) 520 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:44.656228+0000 mon.vm06 (mon.0) 520 : audit [INF] from='osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414]' entity='osd.1' 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.381852+0000 mon.vm06 (mon.0) 521 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.381852+0000 mon.vm06 (mon.0) 521 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.387468+0000 mon.vm06 (mon.0) 522 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.387468+0000 mon.vm06 (mon.0) 522 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.387545+0000 mon.vm06 (mon.0) 523 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.387545+0000 mon.vm06 (mon.0) 523 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.387675+0000 mon.vm06 (mon.0) 524 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.387675+0000 mon.vm06 (mon.0) 524 : audit [INF] from='osd.6 ' 
entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:45.394910+0000 mon.vm06 (mon.0) 525 : cluster [INF] osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013] boot 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:45.394910+0000 mon.vm06 (mon.0) 525 : cluster [INF] osd.3 [v2:192.168.123.109:6808/1289218013,v1:192.168.123.109:6809/1289218013] boot 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:45.395008+0000 mon.vm06 (mon.0) 526 : cluster [INF] osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414] boot 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:45.395008+0000 mon.vm06 (mon.0) 526 : cluster [INF] osd.1 [v2:192.168.123.106:6802/667081414,v1:192.168.123.106:6803/667081414] boot 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:45.395029+0000 mon.vm06 (mon.0) 527 : cluster [DBG] osdmap e18: 8 total, 3 up, 8 in 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: cluster 2026-03-10T12:44:45.395029+0000 mon.vm06 (mon.0) 527 : cluster [DBG] osdmap e18: 8 total, 3 up, 8 in 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.397201+0000 mon.vm09 (mon.1) 19 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.397201+0000 mon.vm09 (mon.1) 19 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.398806+0000 mon.vm06 (mon.0) 528 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.398806+0000 mon.vm06 (mon.0) 528 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.399497+0000 mon.vm06 (mon.0) 529 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.399497+0000 mon.vm06 (mon.0) 529 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.400217+0000 mon.vm06 (mon.0) 530 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: 
audit 2026-03-10T12:44:45.400217+0000 mon.vm06 (mon.0) 530 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.400760+0000 mon.vm06 (mon.0) 531 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:45.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.400760+0000 mon.vm06 (mon.0) 531 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:45.861 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.401408+0000 mon.vm06 (mon.0) 532 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:45.861 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:45 vm09 bash[21409]: audit 2026-03-10T12:44:45.401408+0000 mon.vm06 (mon.0) 532 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.540 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.492055+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.540 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.492055+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.540 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.492127+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.540 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.492127+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.914873+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.914873+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.914939+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:43.914939+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.401773+0000 mon.vm06 (mon.0) 533 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.401773+0000 mon.vm06 (mon.0) 533 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.420602+0000 mon.vm06 (mon.0) 534 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.420602+0000 mon.vm06 (mon.0) 
534 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.421040+0000 mon.vm06 (mon.0) 535 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.421040+0000 mon.vm06 (mon.0) 535 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.421558+0000 mon.vm06 (mon.0) 536 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.421558+0000 mon.vm06 (mon.0) 536 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.421783+0000 mon.vm06 (mon.0) 537 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.421783+0000 mon.vm06 (mon.0) 537 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 
vm06 bash[17497]: audit 2026-03-10T12:44:45.422021+0000 mon.vm06 (mon.0) 538 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.422021+0000 mon.vm06 (mon.0) 538 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.539405+0000 mon.vm06 (mon.0) 539 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.539405+0000 mon.vm06 (mon.0) 539 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.942924+0000 mon.vm06 (mon.0) 540 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.942924+0000 mon.vm06 (mon.0) 540 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.978818+0000 mon.vm06 (mon.0) 541 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, 
"yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:45.978818+0000 mon.vm06 (mon.0) 541 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.400904+0000 mon.vm06 (mon.0) 542 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.400904+0000 mon.vm06 (mon.0) 542 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.404436+0000 mon.vm06 (mon.0) 543 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.404436+0000 mon.vm06 (mon.0) 543 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.409217+0000 mon.vm06 (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:46.541 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.409217+0000 mon.vm06 (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.409279+0000 mon.vm06 (mon.0) 545 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.409279+0000 mon.vm06 (mon.0) 545 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.409312+0000 mon.vm06 (mon.0) 546 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.409312+0000 mon.vm06 (mon.0) 546 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:46.419235+0000 mon.vm06 (mon.0) 547 : 
cluster [INF] osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690] boot 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:46.419235+0000 mon.vm06 (mon.0) 547 : cluster [INF] osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690] boot 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:46.419254+0000 mon.vm06 (mon.0) 548 : cluster [DBG] osdmap e19: 8 total, 4 up, 8 in 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: cluster 2026-03-10T12:44:46.419254+0000 mon.vm06 (mon.0) 548 : cluster [DBG] osdmap e19: 8 total, 4 up, 8 in 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420011+0000 mon.vm06 (mon.0) 549 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420011+0000 mon.vm06 (mon.0) 549 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420118+0000 mon.vm06 (mon.0) 550 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420118+0000 
mon.vm06 (mon.0) 550 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420197+0000 mon.vm06 (mon.0) 551 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420197+0000 mon.vm06 (mon.0) 551 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420233+0000 mon.vm06 (mon.0) 552 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420233+0000 mon.vm06 (mon.0) 552 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420266+0000 mon.vm06 (mon.0) 553 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420266+0000 mon.vm06 (mon.0) 553 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.541 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420296+0000 mon.vm06 (mon.0) 554 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.420296+0000 mon.vm06 (mon.0) 554 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.421228+0000 mon.vm06 (mon.0) 555 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.421228+0000 mon.vm06 (mon.0) 555 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.421667+0000 mon.vm06 (mon.0) 556 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.421667+0000 mon.vm06 (mon.0) 556 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 
12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.421783+0000 mon.vm06 (mon.0) 557 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.541 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:46 vm06 bash[17497]: audit 2026-03-10T12:44:46.421783+0000 mon.vm06 (mon.0) 557 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:43.492055+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:43.492055+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:43.492127+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:43.492127+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:43.914873+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:43.914873+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:43.914939+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: 
cluster 2026-03-10T12:44:43.914939+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.401773+0000 mon.vm06 (mon.0) 533 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.401773+0000 mon.vm06 (mon.0) 533 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.420602+0000 mon.vm06 (mon.0) 534 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.420602+0000 mon.vm06 (mon.0) 534 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.421040+0000 mon.vm06 (mon.0) 535 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.421040+0000 mon.vm06 (mon.0) 535 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 
2026-03-10T12:44:45.421558+0000 mon.vm06 (mon.0) 536 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.421558+0000 mon.vm06 (mon.0) 536 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.421783+0000 mon.vm06 (mon.0) 537 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.421783+0000 mon.vm06 (mon.0) 537 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.422021+0000 mon.vm06 (mon.0) 538 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.422021+0000 mon.vm06 (mon.0) 538 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.539405+0000 mon.vm06 (mon.0) 539 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 
2026-03-10T12:44:45.539405+0000 mon.vm06 (mon.0) 539 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.942924+0000 mon.vm06 (mon.0) 540 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.942924+0000 mon.vm06 (mon.0) 540 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.978818+0000 mon.vm06 (mon.0) 541 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:45.978818+0000 mon.vm06 (mon.0) 541 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.400904+0000 mon.vm06 (mon.0) 542 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 
vm09 bash[21409]: audit 2026-03-10T12:44:46.400904+0000 mon.vm06 (mon.0) 542 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.404436+0000 mon.vm06 (mon.0) 543 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.404436+0000 mon.vm06 (mon.0) 543 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.409217+0000 mon.vm06 (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.409217+0000 mon.vm06 (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.409279+0000 mon.vm06 (mon.0) 545 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.409279+0000 mon.vm06 (mon.0) 545 : audit [INF] 
from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.409312+0000 mon.vm06 (mon.0) 546 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.409312+0000 mon.vm06 (mon.0) 546 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:46.419235+0000 mon.vm06 (mon.0) 547 : cluster [INF] osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690] boot 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:46.419235+0000 mon.vm06 (mon.0) 547 : cluster [INF] osd.2 [v2:192.168.123.106:6810/1494747690,v1:192.168.123.106:6811/1494747690] boot 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:46.419254+0000 mon.vm06 (mon.0) 548 : cluster [DBG] osdmap e19: 8 total, 4 up, 8 in 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: cluster 2026-03-10T12:44:46.419254+0000 mon.vm06 (mon.0) 548 : cluster [DBG] osdmap e19: 8 total, 4 up, 8 in 2026-03-10T12:44:46.860 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420011+0000 mon.vm06 (mon.0) 549 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420011+0000 mon.vm06 (mon.0) 549 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420118+0000 mon.vm06 (mon.0) 550 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420118+0000 mon.vm06 (mon.0) 550 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420197+0000 mon.vm06 (mon.0) 551 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420197+0000 mon.vm06 (mon.0) 551 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:46.860 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420233+0000 mon.vm06 (mon.0) 552 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420233+0000 mon.vm06 (mon.0) 552 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420266+0000 mon.vm06 (mon.0) 553 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420266+0000 mon.vm06 (mon.0) 553 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420296+0000 mon.vm06 (mon.0) 554 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.420296+0000 mon.vm06 (mon.0) 554 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.421228+0000 mon.vm06 (mon.0) 555 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.421228+0000 mon.vm06 (mon.0) 555 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.421667+0000 mon.vm06 (mon.0) 556 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.421667+0000 mon.vm06 (mon.0) 556 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.421783+0000 mon.vm06 (mon.0) 557 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:46.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:46 vm09 bash[21409]: audit 2026-03-10T12:44:46.421783+0000 mon.vm06 (mon.0) 557 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:44.759711+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: 
cluster 2026-03-10T12:44:44.759711+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:44.759787+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:44.759787+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.342618+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.342618+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.342699+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.342699+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.354619+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.354619+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.354670+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.354670+0000 
osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.939755+0000 mgr.vm06.cofomf (mgr.14193) 92 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:45.939755+0000 mgr.vm06.cofomf (mgr.14193) 92 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:46.949392+0000 mon.vm06 (mon.0) 558 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:46.949392+0000 mon.vm06 (mon.0) 558 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:46.991150+0000 mon.vm06 (mon.0) 559 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:46.991150+0000 mon.vm06 (mon.0) 559 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.007043+0000 mon.vm06 (mon.0) 560 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.007043+0000 mon.vm06 
(mon.0) 560 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.399788+0000 mon.vm06 (mon.0) 561 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.399788+0000 mon.vm06 (mon.0) 561 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.412436+0000 mon.vm06 (mon.0) 562 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.412436+0000 mon.vm06 (mon.0) 562 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.412533+0000 mon.vm06 (mon.0) 563 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 
2026-03-10T12:44:47.412533+0000 mon.vm06 (mon.0) 563 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416491+0000 mon.vm06 (mon.0) 564 : cluster [INF] osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434] boot 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416491+0000 mon.vm06 (mon.0) 564 : cluster [INF] osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434] boot 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416605+0000 mon.vm06 (mon.0) 565 : cluster [INF] osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996] boot 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416605+0000 mon.vm06 (mon.0) 565 : cluster [INF] osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996] boot 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416723+0000 mon.vm06 (mon.0) 566 : cluster [INF] osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564] boot 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416723+0000 mon.vm06 (mon.0) 566 : cluster [INF] osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564] boot 2026-03-10T12:44:47.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416883+0000 mon.vm06 (mon.0) 567 : 
cluster [DBG] osdmap e20: 8 total, 7 up, 8 in 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: cluster 2026-03-10T12:44:47.416883+0000 mon.vm06 (mon.0) 567 : cluster [DBG] osdmap e20: 8 total, 7 up, 8 in 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419043+0000 mon.vm06 (mon.0) 568 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419043+0000 mon.vm06 (mon.0) 568 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419122+0000 mon.vm06 (mon.0) 569 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419122+0000 mon.vm06 (mon.0) 569 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419168+0000 mon.vm06 (mon.0) 570 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419168+0000 mon.vm06 (mon.0) 570 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd 
metadata", "id": 6}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419211+0000 mon.vm06 (mon.0) 571 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.419211+0000 mon.vm06 (mon.0) 571 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.425898+0000 mon.vm06 (mon.0) 572 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:47.848 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:47 vm06 bash[17497]: audit 2026-03-10T12:44:47.425898+0000 mon.vm06 (mon.0) 572 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:44.759711+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:44.759711+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:44.759787+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:44.759787+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 
2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.342618+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.342618+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.342699+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.342699+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.354619+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.354619+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.354670+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.354670+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.939755+0000 mgr.vm06.cofomf (mgr.14193) 92 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:45.939755+0000 mgr.vm06.cofomf (mgr.14193) 92 : 
cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:46.949392+0000 mon.vm06 (mon.0) 558 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:46.949392+0000 mon.vm06 (mon.0) 558 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:46.991150+0000 mon.vm06 (mon.0) 559 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:46.991150+0000 mon.vm06 (mon.0) 559 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.007043+0000 mon.vm06 (mon.0) 560 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.007043+0000 mon.vm06 (mon.0) 560 : audit [INF] from='osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564]' entity='osd.5' 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.399788+0000 mon.vm06 (mon.0) 561 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.859 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.399788+0000 mon.vm06 (mon.0) 561 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.412436+0000 mon.vm06 (mon.0) 562 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.412436+0000 mon.vm06 (mon.0) 562 : audit [INF] from='osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.412533+0000 mon.vm06 (mon.0) 563 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.412533+0000 mon.vm06 (mon.0) 563 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416491+0000 mon.vm06 (mon.0) 564 : cluster 
[INF] osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434] boot 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416491+0000 mon.vm06 (mon.0) 564 : cluster [INF] osd.6 [v2:192.168.123.109:6824/176302434,v1:192.168.123.109:6825/176302434] boot 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416605+0000 mon.vm06 (mon.0) 565 : cluster [INF] osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996] boot 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416605+0000 mon.vm06 (mon.0) 565 : cluster [INF] osd.4 [v2:192.168.123.109:6816/1593991996,v1:192.168.123.109:6817/1593991996] boot 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416723+0000 mon.vm06 (mon.0) 566 : cluster [INF] osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564] boot 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416723+0000 mon.vm06 (mon.0) 566 : cluster [INF] osd.5 [v2:192.168.123.106:6818/2068427564,v1:192.168.123.106:6819/2068427564] boot 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416883+0000 mon.vm06 (mon.0) 567 : cluster [DBG] osdmap e20: 8 total, 7 up, 8 in 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: cluster 2026-03-10T12:44:47.416883+0000 mon.vm06 (mon.0) 567 : cluster [DBG] osdmap e20: 8 total, 7 up, 8 in 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.419043+0000 mon.vm06 (mon.0) 568 : audit [DBG] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.419043+0000 mon.vm06 (mon.0) 568 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.419122+0000 mon.vm06 (mon.0) 569 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.419122+0000 mon.vm06 (mon.0) 569 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.419168+0000 mon.vm06 (mon.0) 570 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:47.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.419168+0000 mon.vm06 (mon.0) 570 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:44:47.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.419211+0000 mon.vm06 (mon.0) 571 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:47.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 
2026-03-10T12:44:47.419211+0000 mon.vm06 (mon.0) 571 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:47.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.425898+0000 mon.vm06 (mon.0) 572 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:47.860 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:47 vm09 bash[21409]: audit 2026-03-10T12:44:47.425898+0000 mon.vm06 (mon.0) 572 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:47.660014+0000 mon.vm06 (mon.0) 573 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:47.660014+0000 mon.vm06 (mon.0) 573 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:47.666501+0000 mon.vm06 (mon.0) 574 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:47.666501+0000 mon.vm06 (mon.0) 574 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: cluster 2026-03-10T12:44:48.419092+0000 mon.vm06 (mon.0) 575 : cluster [INF] osd.7 
[v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906] boot 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: cluster 2026-03-10T12:44:48.419092+0000 mon.vm06 (mon.0) 575 : cluster [INF] osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906] boot 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: cluster 2026-03-10T12:44:48.419117+0000 mon.vm06 (mon.0) 576 : cluster [DBG] osdmap e21: 8 total, 8 up, 8 in 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: cluster 2026-03-10T12:44:48.419117+0000 mon.vm06 (mon.0) 576 : cluster [DBG] osdmap e21: 8 total, 8 up, 8 in 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.419206+0000 mon.vm06 (mon.0) 577 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.419206+0000 mon.vm06 (mon.0) 577 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.441934+0000 mon.vm06 (mon.0) 578 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.441934+0000 mon.vm06 (mon.0) 578 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.446607+0000 mon.vm06 (mon.0) 579 : 
audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.446607+0000 mon.vm06 (mon.0) 579 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.488779+0000 mon.vm06 (mon.0) 580 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:49.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:48 vm06 bash[17497]: audit 2026-03-10T12:44:48.488779+0000 mon.vm06 (mon.0) 580 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:47.660014+0000 mon.vm06 (mon.0) 573 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:47.660014+0000 mon.vm06 (mon.0) 573 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:47.666501+0000 mon.vm06 (mon.0) 574 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:47.666501+0000 mon.vm06 (mon.0) 574 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 
bash[21409]: cluster 2026-03-10T12:44:48.419092+0000 mon.vm06 (mon.0) 575 : cluster [INF] osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906] boot 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: cluster 2026-03-10T12:44:48.419092+0000 mon.vm06 (mon.0) 575 : cluster [INF] osd.7 [v2:192.168.123.106:6826/3705412906,v1:192.168.123.106:6827/3705412906] boot 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: cluster 2026-03-10T12:44:48.419117+0000 mon.vm06 (mon.0) 576 : cluster [DBG] osdmap e21: 8 total, 8 up, 8 in 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: cluster 2026-03-10T12:44:48.419117+0000 mon.vm06 (mon.0) 576 : cluster [DBG] osdmap e21: 8 total, 8 up, 8 in 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.419206+0000 mon.vm06 (mon.0) 577 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.419206+0000 mon.vm06 (mon.0) 577 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.441934+0000 mon.vm06 (mon.0) 578 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.441934+0000 mon.vm06 (mon.0) 578 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.446607+0000 mon.vm06 (mon.0) 579 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.446607+0000 mon.vm06 (mon.0) 579 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.488779+0000 mon.vm06 (mon.0) 580 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:49.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:48 vm09 bash[21409]: audit 2026-03-10T12:44:48.488779+0000 mon.vm06 (mon.0) 580 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:44:50.026 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:46.955922+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:46.955922+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:46.956004+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:46.956004+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps 
scrub ok 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:47.939951+0000 mgr.vm06.cofomf (mgr.14193) 93 : cluster [DBG] pgmap v46: 1 pgs: 1 unknown; 0 B data, 958 MiB used, 119 GiB / 120 GiB avail 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:47.939951+0000 mgr.vm06.cofomf (mgr.14193) 93 : cluster [DBG] pgmap v46: 1 pgs: 1 unknown; 0 B data, 958 MiB used, 119 GiB / 120 GiB avail 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:49.420689+0000 mon.vm06 (mon.0) 581 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in 2026-03-10T12:44:50.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:49 vm09 bash[21409]: cluster 2026-03-10T12:44:49.420689+0000 mon.vm06 (mon.0) 581 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:46.955922+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:46.955922+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:46.956004+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:46.956004+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:47.939951+0000 mgr.vm06.cofomf (mgr.14193) 93 : cluster [DBG] pgmap v46: 1 pgs: 1 unknown; 0 B data, 958 MiB used, 119 GiB / 120 GiB 
avail 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:47.939951+0000 mgr.vm06.cofomf (mgr.14193) 93 : cluster [DBG] pgmap v46: 1 pgs: 1 unknown; 0 B data, 958 MiB used, 119 GiB / 120 GiB avail 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:49.420689+0000 mon.vm06 (mon.0) 581 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in 2026-03-10T12:44:50.088 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:49 vm06 bash[17497]: cluster 2026-03-10T12:44:49.420689+0000 mon.vm06 (mon.0) 581 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in 2026-03-10T12:44:50.302 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:50.368 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":22,"num_osds":8,"num_up_osds":8,"osd_up_since":1773146688,"num_in_osds":8,"osd_in_since":1773146667,"num_remapped_pgs":0} 2026-03-10T12:44:50.368 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd dump --format=json 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.303176+0000 mon.vm06 (mon.0) 582 : audit [DBG] from='client.? 192.168.123.106:0/4128211037' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.303176+0000 mon.vm06 (mon.0) 582 : audit [DBG] from='client.? 
192.168.123.106:0/4128211037' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: cluster 2026-03-10T12:44:50.422880+0000 mon.vm06 (mon.0) 583 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: cluster 2026-03-10T12:44:50.422880+0000 mon.vm06 (mon.0) 583 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.676224+0000 mon.vm06 (mon.0) 584 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.676224+0000 mon.vm06 (mon.0) 584 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.692800+0000 mon.vm06 (mon.0) 585 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.692800+0000 mon.vm06 (mon.0) 585 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.692976+0000 mon.vm06 (mon.0) 586 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.692976+0000 mon.vm06 (mon.0) 586 : audit [DBG] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.693147+0000 mon.vm06 (mon.0) 587 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:51.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:50 vm06 bash[17497]: audit 2026-03-10T12:44:50.693147+0000 mon.vm06 (mon.0) 587 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.303176+0000 mon.vm06 (mon.0) 582 : audit [DBG] from='client.? 192.168.123.106:0/4128211037' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.303176+0000 mon.vm06 (mon.0) 582 : audit [DBG] from='client.? 
192.168.123.106:0/4128211037' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: cluster 2026-03-10T12:44:50.422880+0000 mon.vm06 (mon.0) 583 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: cluster 2026-03-10T12:44:50.422880+0000 mon.vm06 (mon.0) 583 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.676224+0000 mon.vm06 (mon.0) 584 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.676224+0000 mon.vm06 (mon.0) 584 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.692800+0000 mon.vm06 (mon.0) 585 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.692800+0000 mon.vm06 (mon.0) 585 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.692976+0000 mon.vm06 (mon.0) 586 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.692976+0000 mon.vm06 (mon.0) 586 : audit [DBG] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.693147+0000 mon.vm06 (mon.0) 587 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:51.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:50 vm09 bash[21409]: audit 2026-03-10T12:44:50.693147+0000 mon.vm06 (mon.0) 587 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: cluster 2026-03-10T12:44:49.940171+0000 mgr.vm06.cofomf (mgr.14193) 94 : cluster [DBG] pgmap v49: 1 pgs: 1 creating+remapped; 0 B data, 1.4 GiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: cluster 2026-03-10T12:44:49.940171+0000 mgr.vm06.cofomf (mgr.14193) 94 : cluster [DBG] pgmap v49: 1 pgs: 1 creating+remapped; 0 B data, 1.4 GiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.694779+0000 mon.vm09 (mon.1) 20 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.694779+0000 mon.vm09 (mon.1) 20 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.695607+0000 mon.vm06 (mon.0) 588 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.695607+0000 mon.vm06 (mon.0) 588 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.695696+0000 mon.vm06 (mon.0) 589 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.695696+0000 mon.vm06 (mon.0) 589 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.714234+0000 mon.vm09 (mon.1) 21 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: audit 2026-03-10T12:44:50.714234+0000 mon.vm09 (mon.1) 21 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: cluster 2026-03-10T12:44:51.425801+0000 mon.vm06 (mon.0) 590 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in 2026-03-10T12:44:52.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:51 vm06 bash[17497]: cluster 2026-03-10T12:44:51.425801+0000 mon.vm06 (mon.0) 590 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: 
cluster 2026-03-10T12:44:49.940171+0000 mgr.vm06.cofomf (mgr.14193) 94 : cluster [DBG] pgmap v49: 1 pgs: 1 creating+remapped; 0 B data, 1.4 GiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: cluster 2026-03-10T12:44:49.940171+0000 mgr.vm06.cofomf (mgr.14193) 94 : cluster [DBG] pgmap v49: 1 pgs: 1 creating+remapped; 0 B data, 1.4 GiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.694779+0000 mon.vm09 (mon.1) 20 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.694779+0000 mon.vm09 (mon.1) 20 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.695607+0000 mon.vm06 (mon.0) 588 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.695607+0000 mon.vm06 (mon.0) 588 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm06"}]: dispatch 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.695696+0000 mon.vm06 (mon.0) 589 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.695696+0000 mon.vm06 
(mon.0) 589 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.714234+0000 mon.vm09 (mon.1) 21 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: audit 2026-03-10T12:44:50.714234+0000 mon.vm09 (mon.1) 21 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: cluster 2026-03-10T12:44:51.425801+0000 mon.vm06 (mon.0) 590 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in 2026-03-10T12:44:52.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:51 vm09 bash[21409]: cluster 2026-03-10T12:44:51.425801+0000 mon.vm06 (mon.0) 590 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in 2026-03-10T12:44:54.080 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: cluster 2026-03-10T12:44:51.940414+0000 mgr.vm06.cofomf (mgr.14193) 95 : cluster [DBG] pgmap v52: 1 pgs: 1 creating+remapped; 0 B data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: cluster 2026-03-10T12:44:51.940414+0000 mgr.vm06.cofomf (mgr.14193) 95 : cluster [DBG] pgmap v52: 1 pgs: 1 creating+remapped; 0 B data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: cluster 2026-03-10T12:44:52.741483+0000 mon.vm06 (mon.0) 591 : cluster [DBG] mgrmap e18: vm06.cofomf(active, since 80s), standbys: 
vm09.mcduck 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: cluster 2026-03-10T12:44:52.741483+0000 mon.vm06 (mon.0) 591 : cluster [DBG] mgrmap e18: vm06.cofomf(active, since 80s), standbys: vm09.mcduck 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.558576+0000 mon.vm06 (mon.0) 592 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.558576+0000 mon.vm06 (mon.0) 592 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.569545+0000 mon.vm06 (mon.0) 593 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.569545+0000 mon.vm06 (mon.0) 593 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.691496+0000 mon.vm06 (mon.0) 594 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.691496+0000 mon.vm06 (mon.0) 594 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.696668+0000 mon.vm06 (mon.0) 595 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.097 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:53 vm06 bash[17497]: audit 2026-03-10T12:44:53.696668+0000 mon.vm06 (mon.0) 595 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: cluster 2026-03-10T12:44:51.940414+0000 mgr.vm06.cofomf (mgr.14193) 95 : cluster [DBG] pgmap v52: 1 pgs: 1 creating+remapped; 0 B data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: cluster 2026-03-10T12:44:51.940414+0000 mgr.vm06.cofomf (mgr.14193) 95 : cluster [DBG] pgmap v52: 1 pgs: 1 creating+remapped; 0 B data, 1011 MiB used, 159 GiB / 160 GiB avail 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: cluster 2026-03-10T12:44:52.741483+0000 mon.vm06 (mon.0) 591 : cluster [DBG] mgrmap e18: vm06.cofomf(active, since 80s), standbys: vm09.mcduck 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: cluster 2026-03-10T12:44:52.741483+0000 mon.vm06 (mon.0) 591 : cluster [DBG] mgrmap e18: vm06.cofomf(active, since 80s), standbys: vm09.mcduck 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.558576+0000 mon.vm06 (mon.0) 592 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.558576+0000 mon.vm06 (mon.0) 592 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.569545+0000 mon.vm06 (mon.0) 593 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 
2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.569545+0000 mon.vm06 (mon.0) 593 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.691496+0000 mon.vm06 (mon.0) 594 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.691496+0000 mon.vm06 (mon.0) 594 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.696668+0000 mon.vm06 (mon.0) 595 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:53 vm09 bash[21409]: audit 2026-03-10T12:44:53.696668+0000 mon.vm06 (mon.0) 595 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:54.347 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:44:54.347 
INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":24,"fsid":"68e2be40-1c7e-11f1-b779-df2955349a39","created":"2026-03-10T12:42:30.013872+0000","modified":"2026-03-10T12:44:51.420537+0000","last_up_change":"2026-03-10T12:44:48.412345+0000","last_in_change":"2026-03-10T12:44:27.405866+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":9,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T12:44:45.983537+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"non
e"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"7f2eb4cc-66ba-45fb-9311-be96c8a18633","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6801","nonce":920523896}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6803","nonce":920523896}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6807","nonce":920523896}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6805","nonce":920523896}]},"public_addr":"192.168.123.109:6801/920523896","cluster_addr":"192.168.123.109:6803/920523896","heartbeat_back_addr":"192.168.123.109:6807/920523896","heartbeat_front_addr":"192.168.123.109:6805/920523896","state":["exists","up"]},{"osd":1,"uuid":"bdbd3134-047c-4796-a7c4-704227861edc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":667081414},{"type":"v1","addr":"192.1
68.123.106:6803","nonce":667081414}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6805","nonce":667081414}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6809","nonce":667081414}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6807","nonce":667081414}]},"public_addr":"192.168.123.106:6803/667081414","cluster_addr":"192.168.123.106:6805/667081414","heartbeat_back_addr":"192.168.123.106:6809/667081414","heartbeat_front_addr":"192.168.123.106:6807/667081414","state":["exists","up"]},{"osd":2,"uuid":"ac7e07e1-6b13-4553-a71e-9ffd56a18bd7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6811","nonce":1494747690}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6813","nonce":1494747690}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6817","nonce":1494747690}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6815","nonce":1494747690}]},"public_addr":"192.168.123.106:6811/1494747690","cluster_addr":"192.168.123.106:6813/1494747690","heartbeat_back_addr":"192.168.123.106:6817/1494747690","heartbeat_front_addr":"192.168.123.106:6815/1494747690","state":["exists","up"]},{"osd":3,"uuid":"fcac5ce6-457a-460f-a4b9-c37d8346929c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"
public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6809","nonce":1289218013}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6811","nonce":1289218013}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6815","nonce":1289218013}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6813","nonce":1289218013}]},"public_addr":"192.168.123.109:6809/1289218013","cluster_addr":"192.168.123.109:6811/1289218013","heartbeat_back_addr":"192.168.123.109:6815/1289218013","heartbeat_front_addr":"192.168.123.109:6813/1289218013","state":["exists","up"]},{"osd":4,"uuid":"ac094c73-334f-420d-9435-350954d4fcfe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6817","nonce":1593991996}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6819","nonce":1593991996}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6823","nonce":1593991996}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6821","nonce":1593991996}]},"public_addr":"192.168.123.109:6817/1593991996","cluster_addr":"192.168.123.109:6819/1593991996","heartbeat_back_addr":"192.168.123.109:6823/1593991996","heartbeat_front_addr":"192.168.123.109:6821/1593991996","state":["exists","up"]},{"osd":5,"uuid":"11f6c435-3f65-46bf-a53f-4c9da72c0aa3","up":1,"in
":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6818","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6819","nonce":2068427564}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6820","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6821","nonce":2068427564}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6824","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6825","nonce":2068427564}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6822","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6823","nonce":2068427564}]},"public_addr":"192.168.123.106:6819/2068427564","cluster_addr":"192.168.123.106:6821/2068427564","heartbeat_back_addr":"192.168.123.106:6825/2068427564","heartbeat_front_addr":"192.168.123.106:6823/2068427564","state":["exists","up"]},{"osd":6,"uuid":"9d349e15-2ef2-47c0-87db-887b3e5b91c1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6825","nonce":176302434}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6827","nonce":176302434}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6831","nonce":176302434}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6829","nonce":176302434}]},"public_addr":"192.168.123.109:6825/176302434","cluster_addr":"192.168.123.109:6827/176302434","heartbeat_back_addr":"192.168.123.109:6831/176302434","heartbeat_front_addr":"192.168.
123.109:6829/176302434","state":["exists","up"]},{"osd":7,"uuid":"96013d1a-8fdb-4e98-8244-f62c64e15111","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6826","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6827","nonce":3705412906}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6828","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6829","nonce":3705412906}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6832","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6833","nonce":3705412906}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6830","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6831","nonce":3705412906}]},"public_addr":"192.168.123.106:6827/3705412906","cluster_addr":"192.168.123.106:6829/3705412906","heartbeat_back_addr":"192.168.123.106:6833/3705412906","heartbeat_front_addr":"192.168.123.106:6831/3705412906","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:41.447081+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:42.705101+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:43.914941+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:43.492128+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540
701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:44.759789+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:45.354681+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:45.342701+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:46.956005+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.106:0/767330076":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/2002880734":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6800/2737563506":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/1935277119":"2026-03-11T12:42:40.276904+0000","192.168.123.106:6800/3515922276":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/3951003069":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/721737214":"2026-03-11T12:42:40.276904+0000","192.168.123.106:6801/2737563506":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/2702629404":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6800/702909729":"2026-03-11T12:42:53.374279+0000","192.168.123.106:0/627475541":"2026-03-11T12:43:31.922790+0000","192.168.123.106:6801/702909729":"2026-03-11T12:42:53.374279+0000","192.168.123.106:0/2401439199":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/3253637980":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6801/3515922276":"2026-03-11T12:43:31.922790+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_sn
aps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T12:44:54.405 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T12:44:45.983537+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '24', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': 
{'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T12:44:54.405 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd pool get .mgr pg_num 2026-03-10T12:44:55.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:54 vm09 bash[21409]: cluster 2026-03-10T12:44:53.940791+0000 mgr.vm06.cofomf (mgr.14193) 96 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:55.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:54 vm09 bash[21409]: cluster 2026-03-10T12:44:53.940791+0000 mgr.vm06.cofomf (mgr.14193) 96 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:55.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:54 vm09 bash[21409]: audit 2026-03-10T12:44:54.346282+0000 mon.vm09 (mon.1) 22 : audit [DBG] from='client.? 192.168.123.106:0/3048047695' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:44:55.068 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:54 vm09 bash[21409]: audit 2026-03-10T12:44:54.346282+0000 mon.vm09 (mon.1) 22 : audit [DBG] from='client.? 
192.168.123.106:0/3048047695' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:44:55.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:54 vm06 bash[17497]: cluster 2026-03-10T12:44:53.940791+0000 mgr.vm06.cofomf (mgr.14193) 96 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:55.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:54 vm06 bash[17497]: cluster 2026-03-10T12:44:53.940791+0000 mgr.vm06.cofomf (mgr.14193) 96 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:55.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:54 vm06 bash[17497]: audit 2026-03-10T12:44:54.346282+0000 mon.vm09 (mon.1) 22 : audit [DBG] from='client.? 192.168.123.106:0/3048047695' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:44:55.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:54 vm06 bash[17497]: audit 2026-03-10T12:44:54.346282+0000 mon.vm09 (mon.1) 22 : audit [DBG] from='client.? 
192.168.123.106:0/3048047695' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:44:57.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:56 vm06 bash[17497]: cluster 2026-03-10T12:44:55.941170+0000 mgr.vm06.cofomf (mgr.14193) 97 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:57.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:56 vm06 bash[17497]: cluster 2026-03-10T12:44:55.941170+0000 mgr.vm06.cofomf (mgr.14193) 97 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:57.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:56 vm09 bash[21409]: cluster 2026-03-10T12:44:55.941170+0000 mgr.vm06.cofomf (mgr.14193) 97 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:57.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:56 vm09 bash[21409]: cluster 2026-03-10T12:44:55.941170+0000 mgr.vm06.cofomf (mgr.14193) 97 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:58.115 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:44:58.437 INFO:teuthology.orchestra.run.vm06.stdout:pg_num: 1 2026-03-10T12:44:58.500 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-10T12:44:58.500 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T12:44:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:59 vm06 bash[17497]: cluster 2026-03-10T12:44:57.941397+0000 mgr.vm06.cofomf (mgr.14193) 98 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:59 vm06 bash[17497]: cluster 2026-03-10T12:44:57.941397+0000 mgr.vm06.cofomf (mgr.14193) 98 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:59 vm06 bash[17497]: audit 2026-03-10T12:44:58.437926+0000 mon.vm06 (mon.0) 596 : audit [DBG] from='client.? 192.168.123.106:0/1761427168' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T12:44:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:59 vm06 bash[17497]: audit 2026-03-10T12:44:58.437926+0000 mon.vm06 (mon.0) 596 : audit [DBG] from='client.? 
192.168.123.106:0/1761427168' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T12:44:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:59 vm06 bash[17497]: audit 2026-03-10T12:44:58.997035+0000 mon.vm06 (mon.0) 597 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:59.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:44:59 vm06 bash[17497]: audit 2026-03-10T12:44:58.997035+0000 mon.vm06 (mon.0) 597 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:59 vm09 bash[21409]: cluster 2026-03-10T12:44:57.941397+0000 mgr.vm06.cofomf (mgr.14193) 98 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:59 vm09 bash[21409]: cluster 2026-03-10T12:44:57.941397+0000 mgr.vm06.cofomf (mgr.14193) 98 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:44:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:59 vm09 bash[21409]: audit 2026-03-10T12:44:58.437926+0000 mon.vm06 (mon.0) 596 : audit [DBG] from='client.? 192.168.123.106:0/1761427168' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T12:44:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:59 vm09 bash[21409]: audit 2026-03-10T12:44:58.437926+0000 mon.vm06 (mon.0) 596 : audit [DBG] from='client.? 
192.168.123.106:0/1761427168' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T12:44:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:59 vm09 bash[21409]: audit 2026-03-10T12:44:58.997035+0000 mon.vm06 (mon.0) 597 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:44:59.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:44:59 vm09 bash[21409]: audit 2026-03-10T12:44:58.997035+0000 mon.vm06 (mon.0) 597 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: cephadm 2026-03-10T12:44:58.991366+0000 mgr.vm06.cofomf (mgr.14193) 99 : cephadm [INF] Detected new or changed devices on vm06 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: cephadm 2026-03-10T12:44:58.991366+0000 mgr.vm06.cofomf (mgr.14193) 99 : cephadm [INF] Detected new or changed devices on vm06 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.001327+0000 mon.vm06 (mon.0) 598 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.001327+0000 mon.vm06 (mon.0) 598 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.002513+0000 mon.vm06 (mon.0) 599 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: 
audit 2026-03-10T12:44:59.002513+0000 mon.vm06 (mon.0) 599 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: cephadm 2026-03-10T12:44:59.431761+0000 mgr.vm06.cofomf (mgr.14193) 100 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: cephadm 2026-03-10T12:44:59.431761+0000 mgr.vm06.cofomf (mgr.14193) 100 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.437279+0000 mon.vm06 (mon.0) 600 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.437279+0000 mon.vm06 (mon.0) 600 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.441108+0000 mon.vm06 (mon.0) 601 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.441108+0000 mon.vm06 (mon.0) 601 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.441897+0000 mon.vm06 (mon.0) 602 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: 
dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.441897+0000 mon.vm06 (mon.0) 602 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.442460+0000 mon.vm06 (mon.0) 603 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.442460+0000 mon.vm06 (mon.0) 603 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.442843+0000 mon.vm06 (mon.0) 604 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.442843+0000 mon.vm06 (mon.0) 604 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.446182+0000 mon.vm06 (mon.0) 605 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.446182+0000 mon.vm06 (mon.0) 605 : audit [INF] 
from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.447478+0000 mon.vm06 (mon.0) 606 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:45:00.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:00 vm06 bash[17497]: audit 2026-03-10T12:44:59.447478+0000 mon.vm06 (mon.0) 606 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: cephadm 2026-03-10T12:44:58.991366+0000 mgr.vm06.cofomf (mgr.14193) 99 : cephadm [INF] Detected new or changed devices on vm06 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: cephadm 2026-03-10T12:44:58.991366+0000 mgr.vm06.cofomf (mgr.14193) 99 : cephadm [INF] Detected new or changed devices on vm06 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.001327+0000 mon.vm06 (mon.0) 598 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.001327+0000 mon.vm06 (mon.0) 598 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.002513+0000 mon.vm06 (mon.0) 599 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: 
dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.002513+0000 mon.vm06 (mon.0) 599 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: cephadm 2026-03-10T12:44:59.431761+0000 mgr.vm06.cofomf (mgr.14193) 100 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: cephadm 2026-03-10T12:44:59.431761+0000 mgr.vm06.cofomf (mgr.14193) 100 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.437279+0000 mon.vm06 (mon.0) 600 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.437279+0000 mon.vm06 (mon.0) 600 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.441108+0000 mon.vm06 (mon.0) 601 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.441108+0000 mon.vm06 (mon.0) 601 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.441897+0000 mon.vm06 (mon.0) 602 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.441897+0000 mon.vm06 (mon.0) 602 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.442460+0000 mon.vm06 (mon.0) 603 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.442460+0000 mon.vm06 (mon.0) 603 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.442843+0000 mon.vm06 (mon.0) 604 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.442843+0000 mon.vm06 (mon.0) 604 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.446182+0000 mon.vm06 (mon.0) 605 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 
12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.446182+0000 mon.vm06 (mon.0) 605 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.447478+0000 mon.vm06 (mon.0) 606 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:45:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:00 vm09 bash[21409]: audit 2026-03-10T12:44:59.447478+0000 mon.vm06 (mon.0) 606 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:45:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:01 vm06 bash[17497]: cluster 2026-03-10T12:44:59.941686+0000 mgr.vm06.cofomf (mgr.14193) 101 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:01 vm06 bash[17497]: cluster 2026-03-10T12:44:59.941686+0000 mgr.vm06.cofomf (mgr.14193) 101 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:01.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:01 vm09 bash[21409]: cluster 2026-03-10T12:44:59.941686+0000 mgr.vm06.cofomf (mgr.14193) 101 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:01.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:01 vm09 bash[21409]: cluster 2026-03-10T12:44:59.941686+0000 mgr.vm06.cofomf (mgr.14193) 101 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:02.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 
12:45:02 vm06 bash[17497]: audit 2026-03-10T12:45:01.990493+0000 mon.vm06 (mon.0) 607 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:02.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:02 vm06 bash[17497]: audit 2026-03-10T12:45:01.990493+0000 mon.vm06 (mon.0) 607 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:02.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:02 vm09 bash[21409]: audit 2026-03-10T12:45:01.990493+0000 mon.vm06 (mon.0) 607 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:02.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:02 vm09 bash[21409]: audit 2026-03-10T12:45:01.990493+0000 mon.vm06 (mon.0) 607 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:03.150 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:03.438 INFO:teuthology.orchestra.run.vm06.stdout:[client.0] 2026-03-10T12:45:03.438 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBPErBpeBnmGRAAvCl2egTRd+OQwk+lec71iA== 2026-03-10T12:45:03.500 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T12:45:03.501 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T12:45:03.501 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T12:45:03.512 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k 
/etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T12:45:03.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:03 vm06 bash[17497]: cluster 2026-03-10T12:45:01.942000+0000 mgr.vm06.cofomf (mgr.14193) 102 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:03.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:03 vm06 bash[17497]: cluster 2026-03-10T12:45:01.942000+0000 mgr.vm06.cofomf (mgr.14193) 102 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:03.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:03 vm06 bash[17497]: audit 2026-03-10T12:45:03.434404+0000 mon.vm06 (mon.0) 608 : audit [INF] from='client.? 192.168.123.106:0/3668655831' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:03.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:03 vm06 bash[17497]: audit 2026-03-10T12:45:03.434404+0000 mon.vm06 (mon.0) 608 : audit [INF] from='client.? 192.168.123.106:0/3668655831' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:03.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:03 vm06 bash[17497]: audit 2026-03-10T12:45:03.437286+0000 mon.vm06 (mon.0) 609 : audit [INF] from='client.? 
192.168.123.106:0/3668655831' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:03.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:03 vm06 bash[17497]: audit 2026-03-10T12:45:03.437286+0000 mon.vm06 (mon.0) 609 : audit [INF] from='client.? 192.168.123.106:0/3668655831' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:03.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:03 vm09 bash[21409]: cluster 2026-03-10T12:45:01.942000+0000 mgr.vm06.cofomf (mgr.14193) 102 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:03.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:03 vm09 bash[21409]: cluster 2026-03-10T12:45:01.942000+0000 mgr.vm06.cofomf (mgr.14193) 102 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:03.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:03 vm09 bash[21409]: audit 2026-03-10T12:45:03.434404+0000 mon.vm06 (mon.0) 608 : audit [INF] from='client.? 192.168.123.106:0/3668655831' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:03.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:03 vm09 bash[21409]: audit 2026-03-10T12:45:03.434404+0000 mon.vm06 (mon.0) 608 : audit [INF] from='client.? 
192.168.123.106:0/3668655831' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:03.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:03 vm09 bash[21409]: audit 2026-03-10T12:45:03.437286+0000 mon.vm06 (mon.0) 609 : audit [INF] from='client.? 192.168.123.106:0/3668655831' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:03.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:03 vm09 bash[21409]: audit 2026-03-10T12:45:03.437286+0000 mon.vm06 (mon.0) 609 : audit [INF] from='client.? 192.168.123.106:0/3668655831' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:05.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:05 vm06 bash[17497]: cluster 2026-03-10T12:45:03.942307+0000 mgr.vm06.cofomf (mgr.14193) 103 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:05.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:05 vm06 bash[17497]: cluster 2026-03-10T12:45:03.942307+0000 mgr.vm06.cofomf (mgr.14193) 103 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:05.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:05 vm09 bash[21409]: cluster 2026-03-10T12:45:03.942307+0000 mgr.vm06.cofomf (mgr.14193) 103 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:05.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:05 vm09 bash[21409]: cluster 2026-03-10T12:45:03.942307+0000 mgr.vm06.cofomf (mgr.14193) 103 : cluster 
[DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:07.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:07 vm06 bash[17497]: cluster 2026-03-10T12:45:05.942597+0000 mgr.vm06.cofomf (mgr.14193) 104 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:07.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:07 vm06 bash[17497]: cluster 2026-03-10T12:45:05.942597+0000 mgr.vm06.cofomf (mgr.14193) 104 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:07.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:07 vm09 bash[21409]: cluster 2026-03-10T12:45:05.942597+0000 mgr.vm06.cofomf (mgr.14193) 104 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:07.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:07 vm09 bash[21409]: cluster 2026-03-10T12:45:05.942597+0000 mgr.vm06.cofomf (mgr.14193) 104 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:08.141 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm09/config 2026-03-10T12:45:08.435 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-10T12:45:08.435 INFO:teuthology.orchestra.run.vm09.stdout: key = AQBUErBpc/ikGRAAqCBEe0eVD4wa4bOVDgf/jg== 2026-03-10T12:45:08.505 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T12:45:08.505 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T12:45:08.505 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T12:45:08.516 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-10T12:45:08.516 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T12:45:08.516 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph mgr dump --format=json 2026-03-10T12:45:08.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:08 vm06 bash[17497]: audit 2026-03-10T12:45:08.429111+0000 mon.vm09 (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/3574850870' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:08 vm06 bash[17497]: audit 2026-03-10T12:45:08.429111+0000 mon.vm09 (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/3574850870' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:08 vm06 bash[17497]: audit 2026-03-10T12:45:08.430153+0000 mon.vm06 (mon.0) 610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:08 vm06 bash[17497]: audit 2026-03-10T12:45:08.430153+0000 mon.vm06 (mon.0) 610 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:08 vm06 bash[17497]: audit 2026-03-10T12:45:08.432594+0000 mon.vm06 (mon.0) 611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:08.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:08 vm06 bash[17497]: audit 2026-03-10T12:45:08.432594+0000 mon.vm06 (mon.0) 611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:08.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:08 vm09 bash[21409]: audit 2026-03-10T12:45:08.429111+0000 mon.vm09 (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/3574850870' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:08 vm09 bash[21409]: audit 2026-03-10T12:45:08.429111+0000 mon.vm09 (mon.1) 23 : audit [INF] from='client.? 192.168.123.109:0/3574850870' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:08 vm09 bash[21409]: audit 2026-03-10T12:45:08.430153+0000 mon.vm06 (mon.0) 610 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:08 vm09 bash[21409]: audit 2026-03-10T12:45:08.430153+0000 mon.vm06 (mon.0) 610 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:45:08.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:08 vm09 bash[21409]: audit 2026-03-10T12:45:08.432594+0000 mon.vm06 (mon.0) 611 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:08.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:08 vm09 bash[21409]: audit 2026-03-10T12:45:08.432594+0000 mon.vm06 (mon.0) 611 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:45:09.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:09 vm06 bash[17497]: cluster 2026-03-10T12:45:07.942866+0000 mgr.vm06.cofomf (mgr.14193) 105 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:09.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:09 vm06 bash[17497]: cluster 2026-03-10T12:45:07.942866+0000 mgr.vm06.cofomf (mgr.14193) 105 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:09.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:09 vm09 bash[21409]: cluster 2026-03-10T12:45:07.942866+0000 mgr.vm06.cofomf (mgr.14193) 105 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:09.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:09 vm09 bash[21409]: cluster 2026-03-10T12:45:07.942866+0000 mgr.vm06.cofomf (mgr.14193) 105 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:11.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:11 vm06 bash[17497]: cluster 2026-03-10T12:45:09.943152+0000 mgr.vm06.cofomf (mgr.14193) 106 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:11.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:11 vm06 bash[17497]: cluster 2026-03-10T12:45:09.943152+0000 mgr.vm06.cofomf (mgr.14193) 106 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:11.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:11 vm09 bash[21409]: cluster 2026-03-10T12:45:09.943152+0000 mgr.vm06.cofomf (mgr.14193) 106 : 
cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:11.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:11 vm09 bash[21409]: cluster 2026-03-10T12:45:09.943152+0000 mgr.vm06.cofomf (mgr.14193) 106 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:13.159 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:13.455 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:45:13.517 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":18,"flags":0,"active_gid":14193,"active_name":"vm06.cofomf","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":3649404803},{"type":"v1","addr":"192.168.123.106:6801","nonce":3649404803}]},"active_addr":"192.168.123.106:6801/3649404803","active_change":"2026-03-10T12:43:31.923059+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14208,"name":"vm09.mcduck","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When a new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
If an unfinished request is removed, an error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack traces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.106:8443/","prometheus":"http://192.168.123.106:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":1468498006}]},{"name":"libceph
sqlite","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":1533283034}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":2889898445}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":2524599659}]}]} 2026-03-10T12:45:13.519 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T12:45:13.519 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T12:45:13.519 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd dump --format=json 2026-03-10T12:45:13.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:13 vm06 bash[17497]: cluster 2026-03-10T12:45:11.943404+0000 mgr.vm06.cofomf (mgr.14193) 107 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:13.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:13 vm06 bash[17497]: cluster 2026-03-10T12:45:11.943404+0000 mgr.vm06.cofomf (mgr.14193) 107 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:13.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:13 vm06 bash[17497]: audit 2026-03-10T12:45:13.452560+0000 mon.vm06 (mon.0) 612 : audit [DBG] from='client.? 192.168.123.106:0/3250803501' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T12:45:13.847 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:13 vm06 bash[17497]: audit 2026-03-10T12:45:13.452560+0000 mon.vm06 (mon.0) 612 : audit [DBG] from='client.? 
192.168.123.106:0/3250803501' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T12:45:13.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:13 vm09 bash[21409]: cluster 2026-03-10T12:45:11.943404+0000 mgr.vm06.cofomf (mgr.14193) 107 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:13.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:13 vm09 bash[21409]: cluster 2026-03-10T12:45:11.943404+0000 mgr.vm06.cofomf (mgr.14193) 107 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:13.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:13 vm09 bash[21409]: audit 2026-03-10T12:45:13.452560+0000 mon.vm06 (mon.0) 612 : audit [DBG] from='client.? 192.168.123.106:0/3250803501' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T12:45:13.859 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:13 vm09 bash[21409]: audit 2026-03-10T12:45:13.452560+0000 mon.vm06 (mon.0) 612 : audit [DBG] from='client.? 
192.168.123.106:0/3250803501' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T12:45:16.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:15 vm06 bash[17497]: cluster 2026-03-10T12:45:13.943769+0000 mgr.vm06.cofomf (mgr.14193) 108 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:16.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:15 vm06 bash[17497]: cluster 2026-03-10T12:45:13.943769+0000 mgr.vm06.cofomf (mgr.14193) 108 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:16.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:15 vm09 bash[21409]: cluster 2026-03-10T12:45:13.943769+0000 mgr.vm06.cofomf (mgr.14193) 108 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:16.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:15 vm09 bash[21409]: cluster 2026-03-10T12:45:13.943769+0000 mgr.vm06.cofomf (mgr.14193) 108 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:17.194 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:18.011 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:45:18.012 
INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":24,"fsid":"68e2be40-1c7e-11f1-b779-df2955349a39","created":"2026-03-10T12:42:30.013872+0000","modified":"2026-03-10T12:44:51.420537+0000","last_up_change":"2026-03-10T12:44:48.412345+0000","last_in_change":"2026-03-10T12:44:27.405866+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":9,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T12:44:45.983537+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"non
e"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"7f2eb4cc-66ba-45fb-9311-be96c8a18633","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6801","nonce":920523896}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6803","nonce":920523896}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6807","nonce":920523896}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6805","nonce":920523896}]},"public_addr":"192.168.123.109:6801/920523896","cluster_addr":"192.168.123.109:6803/920523896","heartbeat_back_addr":"192.168.123.109:6807/920523896","heartbeat_front_addr":"192.168.123.109:6805/920523896","state":["exists","up"]},{"osd":1,"uuid":"bdbd3134-047c-4796-a7c4-704227861edc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":667081414},{"type":"v1","addr":"192.1
68.123.106:6803","nonce":667081414}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6805","nonce":667081414}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6809","nonce":667081414}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6807","nonce":667081414}]},"public_addr":"192.168.123.106:6803/667081414","cluster_addr":"192.168.123.106:6805/667081414","heartbeat_back_addr":"192.168.123.106:6809/667081414","heartbeat_front_addr":"192.168.123.106:6807/667081414","state":["exists","up"]},{"osd":2,"uuid":"ac7e07e1-6b13-4553-a71e-9ffd56a18bd7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6811","nonce":1494747690}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6813","nonce":1494747690}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6817","nonce":1494747690}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6815","nonce":1494747690}]},"public_addr":"192.168.123.106:6811/1494747690","cluster_addr":"192.168.123.106:6813/1494747690","heartbeat_back_addr":"192.168.123.106:6817/1494747690","heartbeat_front_addr":"192.168.123.106:6815/1494747690","state":["exists","up"]},{"osd":3,"uuid":"fcac5ce6-457a-460f-a4b9-c37d8346929c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"
public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6809","nonce":1289218013}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6811","nonce":1289218013}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6815","nonce":1289218013}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6813","nonce":1289218013}]},"public_addr":"192.168.123.109:6809/1289218013","cluster_addr":"192.168.123.109:6811/1289218013","heartbeat_back_addr":"192.168.123.109:6815/1289218013","heartbeat_front_addr":"192.168.123.109:6813/1289218013","state":["exists","up"]},{"osd":4,"uuid":"ac094c73-334f-420d-9435-350954d4fcfe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6817","nonce":1593991996}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6819","nonce":1593991996}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6823","nonce":1593991996}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6821","nonce":1593991996}]},"public_addr":"192.168.123.109:6817/1593991996","cluster_addr":"192.168.123.109:6819/1593991996","heartbeat_back_addr":"192.168.123.109:6823/1593991996","heartbeat_front_addr":"192.168.123.109:6821/1593991996","state":["exists","up"]},{"osd":5,"uuid":"11f6c435-3f65-46bf-a53f-4c9da72c0aa3","up":1,"in
":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6818","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6819","nonce":2068427564}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6820","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6821","nonce":2068427564}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6824","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6825","nonce":2068427564}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6822","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6823","nonce":2068427564}]},"public_addr":"192.168.123.106:6819/2068427564","cluster_addr":"192.168.123.106:6821/2068427564","heartbeat_back_addr":"192.168.123.106:6825/2068427564","heartbeat_front_addr":"192.168.123.106:6823/2068427564","state":["exists","up"]},{"osd":6,"uuid":"9d349e15-2ef2-47c0-87db-887b3e5b91c1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6825","nonce":176302434}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6827","nonce":176302434}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6831","nonce":176302434}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6829","nonce":176302434}]},"public_addr":"192.168.123.109:6825/176302434","cluster_addr":"192.168.123.109:6827/176302434","heartbeat_back_addr":"192.168.123.109:6831/176302434","heartbeat_front_addr":"192.168.
123.109:6829/176302434","state":["exists","up"]},{"osd":7,"uuid":"96013d1a-8fdb-4e98-8244-f62c64e15111","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6826","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6827","nonce":3705412906}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6828","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6829","nonce":3705412906}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6832","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6833","nonce":3705412906}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6830","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6831","nonce":3705412906}]},"public_addr":"192.168.123.106:6827/3705412906","cluster_addr":"192.168.123.106:6829/3705412906","heartbeat_back_addr":"192.168.123.106:6833/3705412906","heartbeat_front_addr":"192.168.123.106:6831/3705412906","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:41.447081+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:42.705101+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:43.914941+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:43.492128+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540
701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:44.759789+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:45.354681+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:45.342701+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:46.956005+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.106:0/767330076":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/2002880734":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6800/2737563506":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/1935277119":"2026-03-11T12:42:40.276904+0000","192.168.123.106:6800/3515922276":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/3951003069":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/721737214":"2026-03-11T12:42:40.276904+0000","192.168.123.106:6801/2737563506":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/2702629404":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6800/702909729":"2026-03-11T12:42:53.374279+0000","192.168.123.106:0/627475541":"2026-03-11T12:43:31.922790+0000","192.168.123.106:6801/702909729":"2026-03-11T12:42:53.374279+0000","192.168.123.106:0/2401439199":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/3253637980":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6801/3515922276":"2026-03-11T12:43:31.922790+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_sn
aps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T12:45:18.021 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:17 vm06 bash[17497]: cluster 2026-03-10T12:45:15.944071+0000 mgr.vm06.cofomf (mgr.14193) 109 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:18.021 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:17 vm06 bash[17497]: cluster 2026-03-10T12:45:15.944071+0000 mgr.vm06.cofomf (mgr.14193) 109 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:18.021 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:17 vm06 bash[17497]: audit 2026-03-10T12:45:16.990725+0000 mon.vm06 (mon.0) 613 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:18.021 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:17 vm06 bash[17497]: audit 2026-03-10T12:45:16.990725+0000 mon.vm06 (mon.0) 613 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:18.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:17 vm09 bash[21409]: cluster 2026-03-10T12:45:15.944071+0000 mgr.vm06.cofomf (mgr.14193) 109 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:18.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:17 vm09 bash[21409]: cluster 2026-03-10T12:45:15.944071+0000 mgr.vm06.cofomf (mgr.14193) 109 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:18.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:17 
vm09 bash[21409]: audit 2026-03-10T12:45:16.990725+0000 mon.vm06 (mon.0) 613 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:18.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:17 vm09 bash[21409]: audit 2026-03-10T12:45:16.990725+0000 mon.vm06 (mon.0) 613 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:19.064 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T12:45:19.064 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd dump --format=json 2026-03-10T12:45:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:19 vm06 bash[17497]: audit 2026-03-10T12:45:18.012065+0000 mon.vm06 (mon.0) 614 : audit [DBG] from='client.? 192.168.123.106:0/1941003869' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:19.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:19 vm06 bash[17497]: audit 2026-03-10T12:45:18.012065+0000 mon.vm06 (mon.0) 614 : audit [DBG] from='client.? 192.168.123.106:0/1941003869' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:19 vm09 bash[21409]: audit 2026-03-10T12:45:18.012065+0000 mon.vm06 (mon.0) 614 : audit [DBG] from='client.? 192.168.123.106:0/1941003869' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:19 vm09 bash[21409]: audit 2026-03-10T12:45:18.012065+0000 mon.vm06 (mon.0) 614 : audit [DBG] from='client.? 
192.168.123.106:0/1941003869' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:20.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:20 vm09 bash[21409]: cluster 2026-03-10T12:45:17.944390+0000 mgr.vm06.cofomf (mgr.14193) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:20.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:20 vm09 bash[21409]: cluster 2026-03-10T12:45:17.944390+0000 mgr.vm06.cofomf (mgr.14193) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:20.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:20 vm06 bash[17497]: cluster 2026-03-10T12:45:17.944390+0000 mgr.vm06.cofomf (mgr.14193) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:20.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:20 vm06 bash[17497]: cluster 2026-03-10T12:45:17.944390+0000 mgr.vm06.cofomf (mgr.14193) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:21.547 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:21 vm06 bash[17497]: cluster 2026-03-10T12:45:19.944692+0000 mgr.vm06.cofomf (mgr.14193) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:21.547 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:21 vm06 bash[17497]: cluster 2026-03-10T12:45:19.944692+0000 mgr.vm06.cofomf (mgr.14193) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:21.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:21 vm09 bash[21409]: cluster 2026-03-10T12:45:19.944692+0000 mgr.vm06.cofomf (mgr.14193) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB 
used, 160 GiB / 160 GiB avail 2026-03-10T12:45:21.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:21 vm09 bash[21409]: cluster 2026-03-10T12:45:19.944692+0000 mgr.vm06.cofomf (mgr.14193) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:23.709 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:24.021 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:45:24.021 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":24,"fsid":"68e2be40-1c7e-11f1-b779-df2955349a39","created":"2026-03-10T12:42:30.013872+0000","modified":"2026-03-10T12:44:51.420537+0000","last_up_change":"2026-03-10T12:44:48.412345+0000","last_in_change":"2026-03-10T12:44:27.405866+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":9,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T12:44:45.983537+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend
_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"7f2eb4cc-66ba-45fb-9311-be96c8a18633","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6801","nonce":920523896}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6803","nonce":920523896}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6807","nonce":920523896}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":920523896},{"type":"v1","addr":"192.168.123.109:6805","nonce":92052
3896}]},"public_addr":"192.168.123.109:6801/920523896","cluster_addr":"192.168.123.109:6803/920523896","heartbeat_back_addr":"192.168.123.109:6807/920523896","heartbeat_front_addr":"192.168.123.109:6805/920523896","state":["exists","up"]},{"osd":1,"uuid":"bdbd3134-047c-4796-a7c4-704227861edc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6803","nonce":667081414}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6805","nonce":667081414}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6809","nonce":667081414}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":667081414},{"type":"v1","addr":"192.168.123.106:6807","nonce":667081414}]},"public_addr":"192.168.123.106:6803/667081414","cluster_addr":"192.168.123.106:6805/667081414","heartbeat_back_addr":"192.168.123.106:6809/667081414","heartbeat_front_addr":"192.168.123.106:6807/667081414","state":["exists","up"]},{"osd":2,"uuid":"ac7e07e1-6b13-4553-a71e-9ffd56a18bd7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6811","nonce":1494747690}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6813","nonce":1494747690}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6817","nonce":1494747690}]},"heartbeat_front_addrs":{"addrvec":[{"type":"
v2","addr":"192.168.123.106:6814","nonce":1494747690},{"type":"v1","addr":"192.168.123.106:6815","nonce":1494747690}]},"public_addr":"192.168.123.106:6811/1494747690","cluster_addr":"192.168.123.106:6813/1494747690","heartbeat_back_addr":"192.168.123.106:6817/1494747690","heartbeat_front_addr":"192.168.123.106:6815/1494747690","state":["exists","up"]},{"osd":3,"uuid":"fcac5ce6-457a-460f-a4b9-c37d8346929c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6809","nonce":1289218013}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6811","nonce":1289218013}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6815","nonce":1289218013}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1289218013},{"type":"v1","addr":"192.168.123.109:6813","nonce":1289218013}]},"public_addr":"192.168.123.109:6809/1289218013","cluster_addr":"192.168.123.109:6811/1289218013","heartbeat_back_addr":"192.168.123.109:6815/1289218013","heartbeat_front_addr":"192.168.123.109:6813/1289218013","state":["exists","up"]},{"osd":4,"uuid":"ac094c73-334f-420d-9435-350954d4fcfe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6817","nonce":1593991996}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6819","nonce":1593991996}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","non
ce":1593991996},{"type":"v1","addr":"192.168.123.109:6823","nonce":1593991996}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":1593991996},{"type":"v1","addr":"192.168.123.109:6821","nonce":1593991996}]},"public_addr":"192.168.123.109:6817/1593991996","cluster_addr":"192.168.123.109:6819/1593991996","heartbeat_back_addr":"192.168.123.109:6823/1593991996","heartbeat_front_addr":"192.168.123.109:6821/1593991996","state":["exists","up"]},{"osd":5,"uuid":"11f6c435-3f65-46bf-a53f-4c9da72c0aa3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6818","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6819","nonce":2068427564}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6820","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6821","nonce":2068427564}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6824","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6825","nonce":2068427564}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6822","nonce":2068427564},{"type":"v1","addr":"192.168.123.106:6823","nonce":2068427564}]},"public_addr":"192.168.123.106:6819/2068427564","cluster_addr":"192.168.123.106:6821/2068427564","heartbeat_back_addr":"192.168.123.106:6825/2068427564","heartbeat_front_addr":"192.168.123.106:6823/2068427564","state":["exists","up"]},{"osd":6,"uuid":"9d349e15-2ef2-47c0-87db-887b3e5b91c1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6825","nonce":176302434}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":176302434},{"type":"v1","addr":"192
.168.123.109:6827","nonce":176302434}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6831","nonce":176302434}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":176302434},{"type":"v1","addr":"192.168.123.109:6829","nonce":176302434}]},"public_addr":"192.168.123.109:6825/176302434","cluster_addr":"192.168.123.109:6827/176302434","heartbeat_back_addr":"192.168.123.109:6831/176302434","heartbeat_front_addr":"192.168.123.109:6829/176302434","state":["exists","up"]},{"osd":7,"uuid":"96013d1a-8fdb-4e98-8244-f62c64e15111","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6826","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6827","nonce":3705412906}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6828","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6829","nonce":3705412906}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6832","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6833","nonce":3705412906}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6830","nonce":3705412906},{"type":"v1","addr":"192.168.123.106:6831","nonce":3705412906}]},"public_addr":"192.168.123.106:6827/3705412906","cluster_addr":"192.168.123.106:6829/3705412906","heartbeat_back_addr":"192.168.123.106:6833/3705412906","heartbeat_front_addr":"192.168.123.106:6831/3705412906","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:41.447081+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged
_snaps_scrub":"2026-03-10T12:44:42.705101+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:43.914941+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:43.492128+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:44.759789+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:45.354681+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:45.342701+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:44:46.956005+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.106:0/767330076":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/2002880734":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6800/2737563506":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/1935277119":"2026-03-11T12:42:40.276904+0000","192.168.123.106:6800/3515922276":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/3951003069":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/721737214":"2026-03-11T12:42:40.276904+0000","192.168.123.106:6801/2737563506":"2026-03-11T12:42:40.276904+0000","192.168.123.106:0/2702629404":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6800/702909729":"2026-03-11T12:42:53.374279+0000
","192.168.123.106:0/627475541":"2026-03-11T12:43:31.922790+0000","192.168.123.106:6801/702909729":"2026-03-11T12:42:53.374279+0000","192.168.123.106:0/2401439199":"2026-03-11T12:43:31.922790+0000","192.168.123.106:0/3253637980":"2026-03-11T12:42:53.374279+0000","192.168.123.106:6801/3515922276":"2026-03-11T12:43:31.922790+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T12:45:24.030 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:23 vm06 bash[17497]: cluster 2026-03-10T12:45:21.944921+0000 mgr.vm06.cofomf (mgr.14193) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:24.030 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:23 vm06 bash[17497]: cluster 2026-03-10T12:45:21.944921+0000 mgr.vm06.cofomf (mgr.14193) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:24.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:23 vm09 bash[21409]: cluster 2026-03-10T12:45:21.944921+0000 mgr.vm06.cofomf (mgr.14193) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:24.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:23 vm09 bash[21409]: cluster 2026-03-10T12:45:21.944921+0000 mgr.vm06.cofomf (mgr.14193) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:24.113 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.0 flush_pg_stats 2026-03-10T12:45:24.113 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.1 flush_pg_stats 2026-03-10T12:45:24.113 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.2 flush_pg_stats 2026-03-10T12:45:24.114 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.3 flush_pg_stats 2026-03-10T12:45:24.114 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.4 flush_pg_stats 2026-03-10T12:45:24.114 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.5 flush_pg_stats 2026-03-10T12:45:24.114 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.6 flush_pg_stats 2026-03-10T12:45:24.114 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph tell osd.7 flush_pg_stats 2026-03-10T12:45:25.069 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:24 vm09 bash[21409]: audit 2026-03-10T12:45:24.022012+0000 mon.vm06 (mon.0) 615 : audit [DBG] from='client.? 192.168.123.106:0/21838296' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:25.069 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:24 vm09 bash[21409]: audit 2026-03-10T12:45:24.022012+0000 mon.vm06 (mon.0) 615 : audit [DBG] from='client.? 192.168.123.106:0/21838296' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:25.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:24 vm06 bash[17497]: audit 2026-03-10T12:45:24.022012+0000 mon.vm06 (mon.0) 615 : audit [DBG] from='client.? 192.168.123.106:0/21838296' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:25.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:24 vm06 bash[17497]: audit 2026-03-10T12:45:24.022012+0000 mon.vm06 (mon.0) 615 : audit [DBG] from='client.? 
192.168.123.106:0/21838296' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:45:26.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:25 vm06 bash[17497]: cluster 2026-03-10T12:45:23.945193+0000 mgr.vm06.cofomf (mgr.14193) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:26.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:25 vm06 bash[17497]: cluster 2026-03-10T12:45:23.945193+0000 mgr.vm06.cofomf (mgr.14193) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:26.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:25 vm09 bash[21409]: cluster 2026-03-10T12:45:23.945193+0000 mgr.vm06.cofomf (mgr.14193) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:26.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:25 vm09 bash[21409]: cluster 2026-03-10T12:45:23.945193+0000 mgr.vm06.cofomf (mgr.14193) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:28.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:27 vm06 bash[17497]: cluster 2026-03-10T12:45:25.945451+0000 mgr.vm06.cofomf (mgr.14193) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:28.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:27 vm06 bash[17497]: cluster 2026-03-10T12:45:25.945451+0000 mgr.vm06.cofomf (mgr.14193) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:28.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:27 vm09 bash[21409]: cluster 2026-03-10T12:45:25.945451+0000 mgr.vm06.cofomf (mgr.14193) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB 
used, 160 GiB / 160 GiB avail 2026-03-10T12:45:28.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:27 vm09 bash[21409]: cluster 2026-03-10T12:45:25.945451+0000 mgr.vm06.cofomf (mgr.14193) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:29.105 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.105 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.107 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.108 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.110 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.110 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.112 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.114 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:29.606 INFO:teuthology.orchestra.run.vm06.stdout:77309411338 2026-03-10T12:45:29.606 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.3 2026-03-10T12:45:29.849 INFO:teuthology.orchestra.run.vm06.stdout:77309411338 2026-03-10T12:45:29.849 DEBUG:teuthology.orchestra.run.vm06:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.1 2026-03-10T12:45:29.876 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:29 vm06 bash[17497]: cluster 2026-03-10T12:45:27.945676+0000 mgr.vm06.cofomf (mgr.14193) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:29.876 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:29 vm06 bash[17497]: cluster 2026-03-10T12:45:27.945676+0000 mgr.vm06.cofomf (mgr.14193) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:29.877 INFO:teuthology.orchestra.run.vm06.stdout:85899345930 2026-03-10T12:45:29.877 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.5 2026-03-10T12:45:29.901 INFO:teuthology.orchestra.run.vm06.stdout:68719476747 2026-03-10T12:45:29.901 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.0 2026-03-10T12:45:29.913 INFO:teuthology.orchestra.run.vm06.stdout:85899345930 2026-03-10T12:45:29.913 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.4 2026-03-10T12:45:29.954 INFO:teuthology.orchestra.run.vm06.stdout:90194313226 2026-03-10T12:45:29.954 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 
68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.7 2026-03-10T12:45:30.069 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:29 vm09 bash[21409]: cluster 2026-03-10T12:45:27.945676+0000 mgr.vm06.cofomf (mgr.14193) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:30.069 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:29 vm09 bash[21409]: cluster 2026-03-10T12:45:27.945676+0000 mgr.vm06.cofomf (mgr.14193) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:30.070 INFO:teuthology.orchestra.run.vm06.stdout:85899345930 2026-03-10T12:45:30.070 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.6 2026-03-10T12:45:30.088 INFO:teuthology.orchestra.run.vm06.stdout:81604378634 2026-03-10T12:45:30.088 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph osd last-stat-seq osd.2 2026-03-10T12:45:31.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:30 vm06 bash[17497]: cluster 2026-03-10T12:45:29.945937+0000 mgr.vm06.cofomf (mgr.14193) 116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:31.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:30 vm06 bash[17497]: cluster 2026-03-10T12:45:29.945937+0000 mgr.vm06.cofomf (mgr.14193) 116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:31.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:30 vm09 bash[21409]: cluster 2026-03-10T12:45:29.945937+0000 mgr.vm06.cofomf (mgr.14193) 
116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:31.109 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:30 vm09 bash[21409]: cluster 2026-03-10T12:45:29.945937+0000 mgr.vm06.cofomf (mgr.14193) 116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:32 vm06 bash[17497]: audit 2026-03-10T12:45:31.990970+0000 mon.vm06 (mon.0) 616 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:32.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:32 vm06 bash[17497]: audit 2026-03-10T12:45:31.990970+0000 mon.vm06 (mon.0) 616 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:32.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:32 vm09 bash[21409]: audit 2026-03-10T12:45:31.990970+0000 mon.vm06 (mon.0) 616 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:32.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:32 vm09 bash[21409]: audit 2026-03-10T12:45:31.990970+0000 mon.vm06 (mon.0) 616 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:33 vm06 bash[17497]: cluster 2026-03-10T12:45:31.946157+0000 mgr.vm06.cofomf (mgr.14193) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:33 vm06 bash[17497]: cluster 
2026-03-10T12:45:31.946157+0000 mgr.vm06.cofomf (mgr.14193) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:33.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:33 vm09 bash[21409]: cluster 2026-03-10T12:45:31.946157+0000 mgr.vm06.cofomf (mgr.14193) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:33.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:33 vm09 bash[21409]: cluster 2026-03-10T12:45:31.946157+0000 mgr.vm06.cofomf (mgr.14193) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:34.357 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.358 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.359 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.359 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.363 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.363 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.363 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.365 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:34.960 INFO:teuthology.orchestra.run.vm06.stdout:85899345931 
2026-03-10T12:45:35.098 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345930 got 85899345931 for osd.6 2026-03-10T12:45:35.098 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.114 INFO:teuthology.orchestra.run.vm06.stdout:85899345931 2026-03-10T12:45:35.145 INFO:teuthology.orchestra.run.vm06.stdout:68719476747 2026-03-10T12:45:35.229 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345930 got 85899345931 for osd.5 2026-03-10T12:45:35.229 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.241 INFO:teuthology.orchestra.run.vm06.stdout:77309411339 2026-03-10T12:45:35.264 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476747 got 68719476747 for osd.0 2026-03-10T12:45:35.264 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:35 vm06 bash[17497]: cluster 2026-03-10T12:45:33.946434+0000 mgr.vm06.cofomf (mgr.14193) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:35.264 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:35 vm06 bash[17497]: cluster 2026-03-10T12:45:33.946434+0000 mgr.vm06.cofomf (mgr.14193) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:35.264 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:35 vm06 bash[17497]: audit 2026-03-10T12:45:34.954637+0000 mon.vm06 (mon.0) 617 : audit [DBG] from='client.? 192.168.123.106:0/2909044683' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T12:45:35.264 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:35 vm06 bash[17497]: audit 2026-03-10T12:45:34.954637+0000 mon.vm06 (mon.0) 617 : audit [DBG] from='client.? 
192.168.123.106:0/2909044683' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T12:45:35.264 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.279 INFO:teuthology.orchestra.run.vm06.stdout:81604378635 2026-03-10T12:45:35.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:35 vm09 bash[21409]: cluster 2026-03-10T12:45:33.946434+0000 mgr.vm06.cofomf (mgr.14193) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:35.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:35 vm09 bash[21409]: cluster 2026-03-10T12:45:33.946434+0000 mgr.vm06.cofomf (mgr.14193) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:35.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:35 vm09 bash[21409]: audit 2026-03-10T12:45:34.954637+0000 mon.vm06 (mon.0) 617 : audit [DBG] from='client.? 192.168.123.106:0/2909044683' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T12:45:35.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:35 vm09 bash[21409]: audit 2026-03-10T12:45:34.954637+0000 mon.vm06 (mon.0) 617 : audit [DBG] from='client.? 
192.168.123.106:0/2909044683' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T12:45:35.366 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411338 got 77309411339 for osd.1 2026-03-10T12:45:35.366 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.369 INFO:teuthology.orchestra.run.vm06.stdout:85899345931 2026-03-10T12:45:35.369 INFO:teuthology.orchestra.run.vm06.stdout:90194313227 2026-03-10T12:45:35.400 INFO:tasks.cephadm.ceph_manager.ceph:need seq 81604378634 got 81604378635 for osd.2 2026-03-10T12:45:35.400 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.404 INFO:teuthology.orchestra.run.vm06.stdout:77309411339 2026-03-10T12:45:35.491 INFO:tasks.cephadm.ceph_manager.ceph:need seq 90194313226 got 90194313227 for osd.7 2026-03-10T12:45:35.491 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.505 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345930 got 85899345931 for osd.4 2026-03-10T12:45:35.505 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.537 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411338 got 77309411339 for osd.3 2026-03-10T12:45:35.537 DEBUG:teuthology.parallel:result is None 2026-03-10T12:45:35.537 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T12:45:35.537 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph pg dump --format=json 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.107649+0000 mon.vm06 (mon.0) 618 : audit [DBG] from='client.? 
192.168.123.106:0/2405827823' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.107649+0000 mon.vm06 (mon.0) 618 : audit [DBG] from='client.? 192.168.123.106:0/2405827823' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.143499+0000 mon.vm09 (mon.1) 24 : audit [DBG] from='client.? 192.168.123.106:0/559792251' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.143499+0000 mon.vm09 (mon.1) 24 : audit [DBG] from='client.? 192.168.123.106:0/559792251' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.242185+0000 mon.vm06 (mon.0) 619 : audit [DBG] from='client.? 192.168.123.106:0/2794974275' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.242185+0000 mon.vm06 (mon.0) 619 : audit [DBG] from='client.? 192.168.123.106:0/2794974275' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.273656+0000 mon.vm06 (mon.0) 620 : audit [DBG] from='client.? 
192.168.123.106:0/4214936204' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.273656+0000 mon.vm06 (mon.0) 620 : audit [DBG] from='client.? 192.168.123.106:0/4214936204' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.368993+0000 mon.vm06 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.106:0/1429262252' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.368993+0000 mon.vm06 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.106:0/1429262252' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.369660+0000 mon.vm06 (mon.0) 622 : audit [DBG] from='client.? 192.168.123.106:0/2539286890' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.369660+0000 mon.vm06 (mon.0) 622 : audit [DBG] from='client.? 192.168.123.106:0/2539286890' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.404639+0000 mon.vm06 (mon.0) 623 : audit [DBG] from='client.? 
192.168.123.106:0/429309700' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T12:45:36.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:36 vm06 bash[17497]: audit 2026-03-10T12:45:35.404639+0000 mon.vm06 (mon.0) 623 : audit [DBG] from='client.? 192.168.123.106:0/429309700' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.107649+0000 mon.vm06 (mon.0) 618 : audit [DBG] from='client.? 192.168.123.106:0/2405827823' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.107649+0000 mon.vm06 (mon.0) 618 : audit [DBG] from='client.? 192.168.123.106:0/2405827823' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.143499+0000 mon.vm09 (mon.1) 24 : audit [DBG] from='client.? 192.168.123.106:0/559792251' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.143499+0000 mon.vm09 (mon.1) 24 : audit [DBG] from='client.? 192.168.123.106:0/559792251' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.242185+0000 mon.vm06 (mon.0) 619 : audit [DBG] from='client.? 
192.168.123.106:0/2794974275' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.242185+0000 mon.vm06 (mon.0) 619 : audit [DBG] from='client.? 192.168.123.106:0/2794974275' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.273656+0000 mon.vm06 (mon.0) 620 : audit [DBG] from='client.? 192.168.123.106:0/4214936204' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.273656+0000 mon.vm06 (mon.0) 620 : audit [DBG] from='client.? 192.168.123.106:0/4214936204' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.368993+0000 mon.vm06 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.106:0/1429262252' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.368993+0000 mon.vm06 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.106:0/1429262252' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.369660+0000 mon.vm06 (mon.0) 622 : audit [DBG] from='client.? 
192.168.123.106:0/2539286890' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.369660+0000 mon.vm06 (mon.0) 622 : audit [DBG] from='client.? 192.168.123.106:0/2539286890' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.404639+0000 mon.vm06 (mon.0) 623 : audit [DBG] from='client.? 192.168.123.106:0/429309700' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T12:45:36.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:36 vm09 bash[21409]: audit 2026-03-10T12:45:35.404639+0000 mon.vm06 (mon.0) 623 : audit [DBG] from='client.? 192.168.123.106:0/429309700' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T12:45:37.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:37 vm06 bash[17497]: cluster 2026-03-10T12:45:35.946706+0000 mgr.vm06.cofomf (mgr.14193) 119 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:37.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:37 vm06 bash[17497]: cluster 2026-03-10T12:45:35.946706+0000 mgr.vm06.cofomf (mgr.14193) 119 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:37.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:37 vm09 bash[21409]: cluster 2026-03-10T12:45:35.946706+0000 mgr.vm06.cofomf (mgr.14193) 119 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:37.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:37 vm09 bash[21409]: cluster 2026-03-10T12:45:35.946706+0000 mgr.vm06.cofomf (mgr.14193) 119 : 
cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:39.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:39 vm06 bash[17497]: cluster 2026-03-10T12:45:37.946933+0000 mgr.vm06.cofomf (mgr.14193) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:39.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:39 vm06 bash[17497]: cluster 2026-03-10T12:45:37.946933+0000 mgr.vm06.cofomf (mgr.14193) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:39.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:39 vm09 bash[21409]: cluster 2026-03-10T12:45:37.946933+0000 mgr.vm06.cofomf (mgr.14193) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:39.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:39 vm09 bash[21409]: cluster 2026-03-10T12:45:37.946933+0000 mgr.vm06.cofomf (mgr.14193) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:40.231 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:40.477 INFO:teuthology.orchestra.run.vm06.stderr:dumped all 2026-03-10T12:45:40.477 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:45:40.531 
INFO:teuthology.orchestra.run.vm06.stdout:{"pg_ready":true,"pg_map":{"version":76,"stamp":"2026-03-10T12:45:39.947075+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218272,"kb_used_data":3148,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521120,"statfs":{"total":171765137408,"available":171541626880,"internally_reserved":0,"allocated":3223552,"data_stored":2048208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit
_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001498"},"pg_stats":[{"pgid":"1.0","version":"23'32","reported_seq":57,"reported_epoch":24,"state":"active+clean","last_fresh":"2026-03-10T12:44:51.723603+0000","last_change":"2026-03-10T12:44:50.464059+0000","last_active":"2026-03-10T12:44:51.723603+0000","last_peered":"2026-03-10T12:44:51.723603+0000","last_clean":"2026-03-10T12:44:51.723603+0000","last_became_active":"2026-03-10T12:44:50.463911+0000","last_became_peered":"2026-03-10T12:44:50.463911+0000","last_unstale":"2026-03-10T12:44:51.723603+0000","last_undegraded":"2026-03-10T12:44:51.723603+0000","last_fullsized":"2026-03-10T12:44:51.723603+0000","mapping_epoch":22,"log_start":"0'0","ondisk_log_start":"0'0"
,"created":19,"last_epoch_clean":23,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T12:44:46.387438+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T12:44:46.387438+0000","last_clean_scrub_stamp":"2026-03-10T12:44:46.387438+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:22:12.937844+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,2],"acting":[7,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"n
um_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":21,"seq":90194313228,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27648,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939776,"statfs":{"total":21470642176,"available":21442330624,"internally_reserved":0,"allocated":692224,"data_stored":543076,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1584,"internal_metadata":27458000},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":20,"seq":85899345932,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"t
otal":21470642176,"available":21442928640,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":20,"seq":85899345932,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":20,"seq":85899345932,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":19,"seq":816043786
36,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27648,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939776,"statfs":{"total":21470642176,"available":21442330624,"internally_reserved":0,"allocated":692224,"data_stored":543076,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":18,"seq":77309411340,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27068,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940356,"statfs":{"total":21470642176,"available":21442924544,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":18,"seq":77309411340,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_qu
eue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":16,"seq":68719476749,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27644,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939780,"statfs":{"total":21470642176,"available":21442334720,"internally_reserved":0,"allocated":692224,"data_stored":543076,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T12:45:40.531 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph pg dump --format=json 2026-03-10T12:45:41.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:41 vm06 bash[17497]: cluster 2026-03-10T12:45:39.947198+0000 
mgr.vm06.cofomf (mgr.14193) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:41.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:41 vm06 bash[17497]: cluster 2026-03-10T12:45:39.947198+0000 mgr.vm06.cofomf (mgr.14193) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:41.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:41 vm06 bash[17497]: audit 2026-03-10T12:45:40.477342+0000 mgr.vm06.cofomf (mgr.14193) 122 : audit [DBG] from='client.14422 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:41.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:41 vm06 bash[17497]: audit 2026-03-10T12:45:40.477342+0000 mgr.vm06.cofomf (mgr.14193) 122 : audit [DBG] from='client.14422 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:41.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:41 vm09 bash[21409]: cluster 2026-03-10T12:45:39.947198+0000 mgr.vm06.cofomf (mgr.14193) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:41.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:41 vm09 bash[21409]: cluster 2026-03-10T12:45:39.947198+0000 mgr.vm06.cofomf (mgr.14193) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:41.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:41 vm09 bash[21409]: audit 2026-03-10T12:45:40.477342+0000 mgr.vm06.cofomf (mgr.14193) 122 : audit [DBG] from='client.14422 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:41.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:41 vm09 bash[21409]: audit 
2026-03-10T12:45:40.477342+0000 mgr.vm06.cofomf (mgr.14193) 122 : audit [DBG] from='client.14422 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:43.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:43 vm06 bash[17497]: cluster 2026-03-10T12:45:41.947426+0000 mgr.vm06.cofomf (mgr.14193) 123 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:43.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:43 vm06 bash[17497]: cluster 2026-03-10T12:45:41.947426+0000 mgr.vm06.cofomf (mgr.14193) 123 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:43.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:43 vm09 bash[21409]: cluster 2026-03-10T12:45:41.947426+0000 mgr.vm06.cofomf (mgr.14193) 123 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:43.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:43 vm09 bash[21409]: cluster 2026-03-10T12:45:41.947426+0000 mgr.vm06.cofomf (mgr.14193) 123 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:44.279 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:44.515 INFO:teuthology.orchestra.run.vm06.stderr:dumped all 2026-03-10T12:45:44.515 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:45:44.567 
INFO:teuthology.orchestra.run.vm06.stdout:{"pg_ready":true,"pg_map":{"version":78,"stamp":"2026-03-10T12:45:43.947577+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218272,"kb_used_data":3148,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521120,"statfs":{"total":171765137408,"available":171541626880,"internally_reserved":0,"allocated":3223552,"data_stored":2048208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit
_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001513"},"pg_stats":[{"pgid":"1.0","version":"23'32","reported_seq":57,"reported_epoch":24,"state":"active+clean","last_fresh":"2026-03-10T12:44:51.723603+0000","last_change":"2026-03-10T12:44:50.464059+0000","last_active":"2026-03-10T12:44:51.723603+0000","last_peered":"2026-03-10T12:44:51.723603+0000","last_clean":"2026-03-10T12:44:51.723603+0000","last_became_active":"2026-03-10T12:44:50.463911+0000","last_became_peered":"2026-03-10T12:44:50.463911+0000","last_unstale":"2026-03-10T12:44:51.723603+0000","last_undegraded":"2026-03-10T12:44:51.723603+0000","last_fullsized":"2026-03-10T12:44:51.723603+0000","mapping_epoch":22,"log_start":"0'0","ondisk_log_start":"0'0"
,"created":19,"last_epoch_clean":23,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T12:44:46.387438+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T12:44:46.387438+0000","last_clean_scrub_stamp":"2026-03-10T12:44:46.387438+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:22:12.937844+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,2],"acting":[7,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"n
um_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":21,"seq":90194313229,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27648,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939776,"statfs":{"total":21470642176,"available":21442330624,"internally_reserved":0,"allocated":692224,"data_stored":543076,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1584,"internal_metadata":27458000},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":20,"seq":85899345933,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"t
otal":21470642176,"available":21442928640,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":20,"seq":85899345933,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":20,"seq":85899345933,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":19,"seq":816043786
37,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27648,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939776,"statfs":{"total":21470642176,"available":21442330624,"internally_reserved":0,"allocated":692224,"data_stored":543076,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":18,"seq":77309411341,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27068,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940356,"statfs":{"total":21470642176,"available":21442924544,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":18,"seq":77309411341,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":224,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":229376,"data_stored":83796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_qu
eue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":16,"seq":68719476749,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27644,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939780,"statfs":{"total":21470642176,"available":21442334720,"internally_reserved":0,"allocated":692224,"data_stored":543076,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T12:45:44.567 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T12:45:44.567 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T12:45:44.567 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T12:45:44.567 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph health --format=json 2026-03-10T12:45:45.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:45 vm09 bash[21409]: cluster 2026-03-10T12:45:43.947724+0000 mgr.vm06.cofomf (mgr.14193) 124 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:45.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:45 vm09 bash[21409]: cluster 2026-03-10T12:45:43.947724+0000 mgr.vm06.cofomf (mgr.14193) 124 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:45.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:45 vm09 bash[21409]: audit 2026-03-10T12:45:44.515990+0000 mgr.vm06.cofomf (mgr.14193) 125 : audit [DBG] from='client.14426 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:45.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:45 vm09 bash[21409]: audit 2026-03-10T12:45:44.515990+0000 mgr.vm06.cofomf (mgr.14193) 125 : audit [DBG] from='client.14426 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:45.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:45 vm06 bash[17497]: cluster 2026-03-10T12:45:43.947724+0000 mgr.vm06.cofomf (mgr.14193) 124 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:45.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:45 vm06 bash[17497]: cluster 2026-03-10T12:45:43.947724+0000 mgr.vm06.cofomf (mgr.14193) 124 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 
449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:45.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:45 vm06 bash[17497]: audit 2026-03-10T12:45:44.515990+0000 mgr.vm06.cofomf (mgr.14193) 125 : audit [DBG] from='client.14426 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:45.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:45 vm06 bash[17497]: audit 2026-03-10T12:45:44.515990+0000 mgr.vm06.cofomf (mgr.14193) 125 : audit [DBG] from='client.14426 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:45:47.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:47 vm06 bash[17497]: cluster 2026-03-10T12:45:45.947996+0000 mgr.vm06.cofomf (mgr.14193) 126 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:47.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:47 vm06 bash[17497]: cluster 2026-03-10T12:45:45.947996+0000 mgr.vm06.cofomf (mgr.14193) 126 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:47.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:47 vm06 bash[17497]: audit 2026-03-10T12:45:46.991092+0000 mon.vm06 (mon.0) 624 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:47.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:47 vm06 bash[17497]: audit 2026-03-10T12:45:46.991092+0000 mon.vm06 (mon.0) 624 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:47.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:47 vm09 bash[21409]: cluster 2026-03-10T12:45:45.947996+0000 mgr.vm06.cofomf 
(mgr.14193) 126 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:47.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:47 vm09 bash[21409]: cluster 2026-03-10T12:45:45.947996+0000 mgr.vm06.cofomf (mgr.14193) 126 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:47.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:47 vm09 bash[21409]: audit 2026-03-10T12:45:46.991092+0000 mon.vm06 (mon.0) 624 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:47.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:47 vm09 bash[21409]: audit 2026-03-10T12:45:46.991092+0000 mon.vm06 (mon.0) 624 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:45:48.319 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:48.581 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T12:45:48.581 INFO:teuthology.orchestra.run.vm06.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T12:45:48.632 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T12:45:48.632 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T12:45:48.632 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T12:45:48.634 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm06.local 2026-03-10T12:45:48.634 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch status' 2026-03-10T12:45:49.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:49 vm06 bash[17497]: cluster 2026-03-10T12:45:47.948270+0000 mgr.vm06.cofomf (mgr.14193) 127 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:49.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:49 vm06 bash[17497]: cluster 2026-03-10T12:45:47.948270+0000 mgr.vm06.cofomf (mgr.14193) 127 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:49.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:49 vm06 bash[17497]: audit 2026-03-10T12:45:48.582189+0000 mon.vm06 (mon.0) 625 : audit [DBG] from='client.? 192.168.123.106:0/2662210811' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T12:45:49.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:49 vm06 bash[17497]: audit 2026-03-10T12:45:48.582189+0000 mon.vm06 (mon.0) 625 : audit [DBG] from='client.? 
192.168.123.106:0/2662210811' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T12:45:49.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:49 vm09 bash[21409]: cluster 2026-03-10T12:45:47.948270+0000 mgr.vm06.cofomf (mgr.14193) 127 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:49.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:49 vm09 bash[21409]: cluster 2026-03-10T12:45:47.948270+0000 mgr.vm06.cofomf (mgr.14193) 127 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:49.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:49 vm09 bash[21409]: audit 2026-03-10T12:45:48.582189+0000 mon.vm06 (mon.0) 625 : audit [DBG] from='client.? 192.168.123.106:0/2662210811' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T12:45:49.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:49 vm09 bash[21409]: audit 2026-03-10T12:45:48.582189+0000 mon.vm06 (mon.0) 625 : audit [DBG] from='client.? 
192.168.123.106:0/2662210811' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T12:45:51.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:51 vm06 bash[17497]: cluster 2026-03-10T12:45:49.948535+0000 mgr.vm06.cofomf (mgr.14193) 128 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:51.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:51 vm06 bash[17497]: cluster 2026-03-10T12:45:49.948535+0000 mgr.vm06.cofomf (mgr.14193) 128 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:51 vm09 bash[21409]: cluster 2026-03-10T12:45:49.948535+0000 mgr.vm06.cofomf (mgr.14193) 128 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:51.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:51 vm09 bash[21409]: cluster 2026-03-10T12:45:49.948535+0000 mgr.vm06.cofomf (mgr.14193) 128 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:52.361 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:52.614 INFO:teuthology.orchestra.run.vm06.stdout:Backend: cephadm 2026-03-10T12:45:52.614 INFO:teuthology.orchestra.run.vm06.stdout:Available: Yes 2026-03-10T12:45:52.614 INFO:teuthology.orchestra.run.vm06.stdout:Paused: No 2026-03-10T12:45:52.671 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch ps' 2026-03-10T12:45:53.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:53 vm06 
bash[17497]: cluster 2026-03-10T12:45:51.948788+0000 mgr.vm06.cofomf (mgr.14193) 129 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:53.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:53 vm06 bash[17497]: cluster 2026-03-10T12:45:51.948788+0000 mgr.vm06.cofomf (mgr.14193) 129 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:53.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:53 vm06 bash[17497]: audit 2026-03-10T12:45:52.614793+0000 mgr.vm06.cofomf (mgr.14193) 130 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:53.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:53 vm06 bash[17497]: audit 2026-03-10T12:45:52.614793+0000 mgr.vm06.cofomf (mgr.14193) 130 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:53 vm09 bash[21409]: cluster 2026-03-10T12:45:51.948788+0000 mgr.vm06.cofomf (mgr.14193) 129 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:53 vm09 bash[21409]: cluster 2026-03-10T12:45:51.948788+0000 mgr.vm06.cofomf (mgr.14193) 129 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:53 vm09 bash[21409]: audit 2026-03-10T12:45:52.614793+0000 mgr.vm06.cofomf (mgr.14193) 130 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:53.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:53 vm09 bash[21409]: 
audit 2026-03-10T12:45:52.614793+0000 mgr.vm06.cofomf (mgr.14193) 130 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:55.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:55 vm09 bash[21409]: cluster 2026-03-10T12:45:53.949034+0000 mgr.vm06.cofomf (mgr.14193) 131 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:55.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:55 vm09 bash[21409]: cluster 2026-03-10T12:45:53.949034+0000 mgr.vm06.cofomf (mgr.14193) 131 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:55.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:55 vm06 bash[17497]: cluster 2026-03-10T12:45:53.949034+0000 mgr.vm06.cofomf (mgr.14193) 131 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:55.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:55 vm06 bash[17497]: cluster 2026-03-10T12:45:53.949034+0000 mgr.vm06.cofomf (mgr.14193) 131 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:56.390 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:alertmanager.vm06 vm06 *:9093,9094 running (2m) 62s ago 2m 14.7M - 0.25.0 c8568f914cd2 d108de01b171 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:ceph-exporter.vm06 vm06 *:9926 running (2m) 62s ago 2m 8236k - 19.2.3-678-ge911bdeb 654f31e6858e d4f326ac6f19 2026-03-10T12:45:56.641 
INFO:teuthology.orchestra.run.vm06.stdout:ceph-exporter.vm09 vm09 *:9926 running (2m) 63s ago 2m 6276k - 19.2.3-678-ge911bdeb 654f31e6858e dbda4e85d017 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:crash.vm06 vm06 running (2m) 62s ago 2m 7300k - 19.2.3-678-ge911bdeb 654f31e6858e d63cc854a00b 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:crash.vm09 vm09 running (2m) 63s ago 2m 7304k - 19.2.3-678-ge911bdeb 654f31e6858e 67bd643ad13e 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:grafana.vm06 vm06 *:3000 running (2m) 62s ago 2m 62.5M - 10.4.0 c8b91775d855 27184972028c 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:mgr.vm06.cofomf vm06 *:9283,8765,8443 running (3m) 62s ago 3m 523M - 19.2.3-678-ge911bdeb 654f31e6858e 30170d412316 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:mgr.vm09.mcduck vm09 *:8443,9283,8765 running (2m) 63s ago 2m 464M - 19.2.3-678-ge911bdeb 654f31e6858e 5701589d930f 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:mon.vm06 vm06 running (3m) 62s ago 3m 45.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e f21fdbe2b119 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:mon.vm09 vm09 running (2m) 63s ago 2m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e1bfd103d923 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:node-exporter.vm06 vm06 *:9100 running (2m) 62s ago 2m 7392k - 1.7.0 72c9c2088986 4593d067933a 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:node-exporter.vm09 vm09 *:9100 running (2m) 63s ago 2m 7316k - 1.7.0 72c9c2088986 806bcb363bb7 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:osd.0 vm09 running (77s) 63s ago 79s 32.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8a04d22c3763 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:osd.1 vm06 running (77s) 62s ago 79s 51.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c9313553e715 2026-03-10T12:45:56.641 
INFO:teuthology.orchestra.run.vm06.stdout:osd.2 vm06 running (75s) 62s ago 77s 52.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6aa71f1ea13a 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:osd.3 vm09 running (76s) 63s ago 78s 28.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 98b31acc45db 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:osd.4 vm09 running (74s) 63s ago 77s 50.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7f0abb2a9dc8 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:osd.5 vm06 running (74s) 62s ago 76s 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c9777115d415 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:osd.6 vm09 running (73s) 63s ago 75s 27.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 627e24a2751a 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:osd.7 vm06 running (73s) 62s ago 74s 33.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c2a7280fb0de 2026-03-10T12:45:56.641 INFO:teuthology.orchestra.run.vm06.stdout:prometheus.vm06 vm06 *:9095 running (2m) 62s ago 2m 31.0M - 2.51.0 1d3b7f56885b b305ca4c61b2 2026-03-10T12:45:56.693 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch ls' 2026-03-10T12:45:57.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:57 vm06 bash[17497]: cluster 2026-03-10T12:45:55.949272+0000 mgr.vm06.cofomf (mgr.14193) 132 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:57.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:57 vm06 bash[17497]: cluster 2026-03-10T12:45:55.949272+0000 mgr.vm06.cofomf (mgr.14193) 132 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:57.597 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:57 vm06 bash[17497]: audit 2026-03-10T12:45:56.637618+0000 mgr.vm06.cofomf (mgr.14193) 133 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:57.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:57 vm06 bash[17497]: audit 2026-03-10T12:45:56.637618+0000 mgr.vm06.cofomf (mgr.14193) 133 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:57.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:57 vm09 bash[21409]: cluster 2026-03-10T12:45:55.949272+0000 mgr.vm06.cofomf (mgr.14193) 132 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:57.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:57 vm09 bash[21409]: cluster 2026-03-10T12:45:55.949272+0000 mgr.vm06.cofomf (mgr.14193) 132 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:57.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:57 vm09 bash[21409]: audit 2026-03-10T12:45:56.637618+0000 mgr.vm06.cofomf (mgr.14193) 133 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:57.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:57 vm09 bash[21409]: audit 2026-03-10T12:45:56.637618+0000 mgr.vm06.cofomf (mgr.14193) 133 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:45:59.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:59 vm06 bash[17497]: cluster 2026-03-10T12:45:57.949513+0000 mgr.vm06.cofomf (mgr.14193) 134 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:59.597 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:45:59 vm06 bash[17497]: cluster 2026-03-10T12:45:57.949513+0000 mgr.vm06.cofomf (mgr.14193) 134 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:59.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:59 vm09 bash[21409]: cluster 2026-03-10T12:45:57.949513+0000 mgr.vm06.cofomf (mgr.14193) 134 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:45:59.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:45:59 vm09 bash[21409]: cluster 2026-03-10T12:45:57.949513+0000 mgr.vm06.cofomf (mgr.14193) 134 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:00 vm09 bash[21409]: audit 2026-03-10T12:45:59.487418+0000 mon.vm06 (mon.0) 626 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:46:00.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:00 vm09 bash[21409]: audit 2026-03-10T12:45:59.487418+0000 mon.vm06 (mon.0) 626 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:46:00.420 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:46:00.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:00 vm06 bash[17497]: audit 2026-03-10T12:45:59.487418+0000 mon.vm06 (mon.0) 626 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:46:00.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:00 vm06 bash[17497]: audit 2026-03-10T12:45:59.487418+0000 
mon.vm06 (mon.0) 626 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:alertmanager ?:9093,9094 1/1 66s ago 3m count:1 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:ceph-exporter ?:9926 2/2 67s ago 3m * 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:crash 2/2 67s ago 3m * 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:grafana ?:3000 1/1 66s ago 3m count:1 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:mgr 2/2 67s ago 3m count:2 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:mon 2/2 67s ago 2m vm06:192.168.123.106=vm06;vm09:192.168.123.109=vm09;count:2 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:node-exporter ?:9100 2/2 67s ago 3m * 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:osd.all-available-devices 8 67s ago 117s * 2026-03-10T12:46:00.660 INFO:teuthology.orchestra.run.vm06.stdout:prometheus ?:9095 1/1 66s ago 3m count:1 2026-03-10T12:46:00.713 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch host ls' 2026-03-10T12:46:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:01 vm06 bash[17497]: cluster 2026-03-10T12:45:59.949794+0000 mgr.vm06.cofomf (mgr.14193) 135 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:01 vm06 bash[17497]: cluster 2026-03-10T12:45:59.949794+0000 mgr.vm06.cofomf 
(mgr.14193) 135 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:01 vm06 bash[17497]: audit 2026-03-10T12:46:00.658715+0000 mgr.vm06.cofomf (mgr.14193) 136 : audit [DBG] from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:01.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:01 vm06 bash[17497]: audit 2026-03-10T12:46:00.658715+0000 mgr.vm06.cofomf (mgr.14193) 136 : audit [DBG] from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:01.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:01 vm09 bash[21409]: cluster 2026-03-10T12:45:59.949794+0000 mgr.vm06.cofomf (mgr.14193) 135 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:01.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:01 vm09 bash[21409]: cluster 2026-03-10T12:45:59.949794+0000 mgr.vm06.cofomf (mgr.14193) 135 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:01.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:01 vm09 bash[21409]: audit 2026-03-10T12:46:00.658715+0000 mgr.vm06.cofomf (mgr.14193) 136 : audit [DBG] from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:01.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:01 vm09 bash[21409]: audit 2026-03-10T12:46:00.658715+0000 mgr.vm06.cofomf (mgr.14193) 136 : audit [DBG] from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:02.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:02 vm06 bash[17497]: audit 2026-03-10T12:46:01.991263+0000 mon.vm06 (mon.0) 627 : 
audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:02.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:02 vm06 bash[17497]: audit 2026-03-10T12:46:01.991263+0000 mon.vm06 (mon.0) 627 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:02.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:02 vm09 bash[21409]: audit 2026-03-10T12:46:01.991263+0000 mon.vm06 (mon.0) 627 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:02.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:02 vm09 bash[21409]: audit 2026-03-10T12:46:01.991263+0000 mon.vm06 (mon.0) 627 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:03 vm06 bash[17497]: cluster 2026-03-10T12:46:01.950046+0000 mgr.vm06.cofomf (mgr.14193) 137 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:03.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:03 vm06 bash[17497]: cluster 2026-03-10T12:46:01.950046+0000 mgr.vm06.cofomf (mgr.14193) 137 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:03.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:03 vm09 bash[21409]: cluster 2026-03-10T12:46:01.950046+0000 mgr.vm06.cofomf (mgr.14193) 137 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:03.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:03 vm09 bash[21409]: 
cluster 2026-03-10T12:46:01.950046+0000 mgr.vm06.cofomf (mgr.14193) 137 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:04.454 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:46:04.698 INFO:teuthology.orchestra.run.vm06.stdout:HOST ADDR LABELS STATUS 2026-03-10T12:46:04.698 INFO:teuthology.orchestra.run.vm06.stdout:vm06 192.168.123.106 2026-03-10T12:46:04.698 INFO:teuthology.orchestra.run.vm06.stdout:vm09 192.168.123.109 2026-03-10T12:46:04.698 INFO:teuthology.orchestra.run.vm06.stdout:2 hosts in cluster 2026-03-10T12:46:04.751 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch device ls' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: cluster 2026-03-10T12:46:03.950307+0000 mgr.vm06.cofomf (mgr.14193) 138 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: cluster 2026-03-10T12:46:03.950307+0000 mgr.vm06.cofomf (mgr.14193) 138 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:03.981420+0000 mon.vm06 (mon.0) 628 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:03.981420+0000 mon.vm06 (mon.0) 628 : audit [INF] from='mgr.14193 
192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:03.986402+0000 mon.vm06 (mon.0) 629 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:03.986402+0000 mon.vm06 (mon.0) 629 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.527029+0000 mon.vm06 (mon.0) 630 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.527029+0000 mon.vm06 (mon.0) 630 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.530577+0000 mon.vm06 (mon.0) 631 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.530577+0000 mon.vm06 (mon.0) 631 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.698709+0000 mgr.vm06.cofomf (mgr.14193) 139 : audit [DBG] from='client.24297 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:05.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.698709+0000 mgr.vm06.cofomf (mgr.14193) 139 : audit 
[DBG] from='client.24297 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.825809+0000 mon.vm06 (mon.0) 632 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.825809+0000 mon.vm06 (mon.0) 632 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.826404+0000 mon.vm06 (mon.0) 633 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.826404+0000 mon.vm06 (mon.0) 633 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.831490+0000 mon.vm06 (mon.0) 634 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.831490+0000 mon.vm06 (mon.0) 634 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.833161+0000 mon.vm06 (mon.0) 635 : audit 
[DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:46:05.098 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:04 vm06 bash[17497]: audit 2026-03-10T12:46:04.833161+0000 mon.vm06 (mon.0) 635 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: cluster 2026-03-10T12:46:03.950307+0000 mgr.vm06.cofomf (mgr.14193) 138 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: cluster 2026-03-10T12:46:03.950307+0000 mgr.vm06.cofomf (mgr.14193) 138 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:03.981420+0000 mon.vm06 (mon.0) 628 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:03.981420+0000 mon.vm06 (mon.0) 628 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:03.986402+0000 mon.vm06 (mon.0) 629 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:03.986402+0000 mon.vm06 (mon.0) 629 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' 
entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.527029+0000 mon.vm06 (mon.0) 630 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.527029+0000 mon.vm06 (mon.0) 630 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.530577+0000 mon.vm06 (mon.0) 631 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.530577+0000 mon.vm06 (mon.0) 631 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.698709+0000 mgr.vm06.cofomf (mgr.14193) 139 : audit [DBG] from='client.24297 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.698709+0000 mgr.vm06.cofomf (mgr.14193) 139 : audit [DBG] from='client.24297 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.825809+0000 mon.vm06 (mon.0) 632 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: 
audit 2026-03-10T12:46:04.825809+0000 mon.vm06 (mon.0) 632 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.826404+0000 mon.vm06 (mon.0) 633 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.826404+0000 mon.vm06 (mon.0) 633 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.831490+0000 mon.vm06 (mon.0) 634 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.831490+0000 mon.vm06 (mon.0) 634 : audit [INF] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' 2026-03-10T12:46:05.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.833161+0000 mon.vm06 (mon.0) 635 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:46:05.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:04 vm09 bash[21409]: audit 2026-03-10T12:46:04.833161+0000 mon.vm06 (mon.0) 635 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:46:07.347 
INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:06 vm06 bash[17497]: cluster 2026-03-10T12:46:05.950574+0000 mgr.vm06.cofomf (mgr.14193) 140 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:07.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:06 vm06 bash[17497]: cluster 2026-03-10T12:46:05.950574+0000 mgr.vm06.cofomf (mgr.14193) 140 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:07.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:06 vm09 bash[21409]: cluster 2026-03-10T12:46:05.950574+0000 mgr.vm06.cofomf (mgr.14193) 140 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:07.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:06 vm09 bash[21409]: cluster 2026-03-10T12:46:05.950574+0000 mgr.vm06.cofomf (mgr.14193) 140 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:08.488 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm06 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 69s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm06 /dev/vdb hdd DWNBRSTVMM06001 20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm06 /dev/vdc hdd DWNBRSTVMM06002 20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm06 /dev/vdd hdd DWNBRSTVMM06003 
20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm06 /dev/vde hdd DWNBRSTVMM06004 20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm09 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 69s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm09 /dev/vdb hdd DWNBRSTVMM09001 20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm09 /dev/vdc hdd DWNBRSTVMM09002 20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm09 /dev/vdd hdd DWNBRSTVMM09003 20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.732 INFO:teuthology.orchestra.run.vm06.stdout:vm09 /dev/vde hdd DWNBRSTVMM09004 20.0G No 69s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:46:08.783 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T12:46:08.785 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm06.local 2026-03-10T12:46:08.785 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'stat -c '"'"'%u %g'"'"' /var/log/ceph | grep '"'"'167 167'"'"'' 2026-03-10T12:46:09.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:08 vm06 bash[17497]: cluster 2026-03-10T12:46:07.950807+0000 mgr.vm06.cofomf (mgr.14193) 141 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:09.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:08 vm06 bash[17497]: cluster 2026-03-10T12:46:07.950807+0000 mgr.vm06.cofomf (mgr.14193) 141 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:09.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:08 vm06 bash[17497]: audit 2026-03-10T12:46:08.731770+0000 mgr.vm06.cofomf (mgr.14193) 142 : audit [DBG] from='client.24301 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:09.097 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:08 vm06 bash[17497]: audit 2026-03-10T12:46:08.731770+0000 mgr.vm06.cofomf (mgr.14193) 142 : audit [DBG] from='client.24301 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:09.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:08 vm09 bash[21409]: cluster 2026-03-10T12:46:07.950807+0000 mgr.vm06.cofomf (mgr.14193) 141 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:09.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:08 vm09 bash[21409]: cluster 
2026-03-10T12:46:07.950807+0000 mgr.vm06.cofomf (mgr.14193) 141 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:09.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:08 vm09 bash[21409]: audit 2026-03-10T12:46:08.731770+0000 mgr.vm06.cofomf (mgr.14193) 142 : audit [DBG] from='client.24301 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:09.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:08 vm09 bash[21409]: audit 2026-03-10T12:46:08.731770+0000 mgr.vm06.cofomf (mgr.14193) 142 : audit [DBG] from='client.24301 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:11.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:11 vm06 bash[17497]: cluster 2026-03-10T12:46:09.951058+0000 mgr.vm06.cofomf (mgr.14193) 143 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:11.348 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:11 vm06 bash[17497]: cluster 2026-03-10T12:46:09.951058+0000 mgr.vm06.cofomf (mgr.14193) 143 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:11.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:11 vm09 bash[21409]: cluster 2026-03-10T12:46:09.951058+0000 mgr.vm06.cofomf (mgr.14193) 143 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:11.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:11 vm09 bash[21409]: cluster 2026-03-10T12:46:09.951058+0000 mgr.vm06.cofomf (mgr.14193) 143 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:12.527 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config 
/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:46:12.623 INFO:teuthology.orchestra.run.vm06.stdout:167 167 2026-03-10T12:46:12.666 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch status' 2026-03-10T12:46:13.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:13 vm06 bash[17497]: cluster 2026-03-10T12:46:11.951344+0000 mgr.vm06.cofomf (mgr.14193) 144 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:13.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:13 vm06 bash[17497]: cluster 2026-03-10T12:46:11.951344+0000 mgr.vm06.cofomf (mgr.14193) 144 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:13.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:13 vm09 bash[21409]: cluster 2026-03-10T12:46:11.951344+0000 mgr.vm06.cofomf (mgr.14193) 144 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:13.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:13 vm09 bash[21409]: cluster 2026-03-10T12:46:11.951344+0000 mgr.vm06.cofomf (mgr.14193) 144 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:15 vm06 bash[17497]: cluster 2026-03-10T12:46:13.951633+0000 mgr.vm06.cofomf (mgr.14193) 145 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:15.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:15 vm06 bash[17497]: cluster 2026-03-10T12:46:13.951633+0000 mgr.vm06.cofomf (mgr.14193) 145 : 
cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:15 vm09 bash[21409]: cluster 2026-03-10T12:46:13.951633+0000 mgr.vm06.cofomf (mgr.14193) 145 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:15.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:15 vm09 bash[21409]: cluster 2026-03-10T12:46:13.951633+0000 mgr.vm06.cofomf (mgr.14193) 145 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:16.558 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:46:16.813 INFO:teuthology.orchestra.run.vm06.stdout:Backend: cephadm 2026-03-10T12:46:16.813 INFO:teuthology.orchestra.run.vm06.stdout:Available: Yes 2026-03-10T12:46:16.813 INFO:teuthology.orchestra.run.vm06.stdout:Paused: No 2026-03-10T12:46:16.867 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch ps' 2026-03-10T12:46:17.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:17 vm06 bash[17497]: cluster 2026-03-10T12:46:15.951936+0000 mgr.vm06.cofomf (mgr.14193) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:17.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:17 vm06 bash[17497]: cluster 2026-03-10T12:46:15.951936+0000 mgr.vm06.cofomf (mgr.14193) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:17.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:17 vm06 bash[17497]: 
audit 2026-03-10T12:46:16.991670+0000 mon.vm06 (mon.0) 636 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:17.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:17 vm06 bash[17497]: audit 2026-03-10T12:46:16.991670+0000 mon.vm06 (mon.0) 636 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:17 vm09 bash[21409]: cluster 2026-03-10T12:46:15.951936+0000 mgr.vm06.cofomf (mgr.14193) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:17 vm09 bash[21409]: cluster 2026-03-10T12:46:15.951936+0000 mgr.vm06.cofomf (mgr.14193) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:17 vm09 bash[21409]: audit 2026-03-10T12:46:16.991670+0000 mon.vm06 (mon.0) 636 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:17.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:17 vm09 bash[21409]: audit 2026-03-10T12:46:16.991670+0000 mon.vm06 (mon.0) 636 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:46:18.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:18 vm09 bash[21409]: audit 2026-03-10T12:46:16.813722+0000 mgr.vm06.cofomf (mgr.14193) 147 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-10T12:46:18.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:18 vm09 bash[21409]: audit 2026-03-10T12:46:16.813722+0000 mgr.vm06.cofomf (mgr.14193) 147 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:18.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:18 vm06 bash[17497]: audit 2026-03-10T12:46:16.813722+0000 mgr.vm06.cofomf (mgr.14193) 147 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:18.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:18 vm06 bash[17497]: audit 2026-03-10T12:46:16.813722+0000 mgr.vm06.cofomf (mgr.14193) 147 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:19 vm09 bash[21409]: cluster 2026-03-10T12:46:17.952270+0000 mgr.vm06.cofomf (mgr.14193) 148 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:19.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:19 vm09 bash[21409]: cluster 2026-03-10T12:46:17.952270+0000 mgr.vm06.cofomf (mgr.14193) 148 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:19.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:19 vm06 bash[17497]: cluster 2026-03-10T12:46:17.952270+0000 mgr.vm06.cofomf (mgr.14193) 148 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:19.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:19 vm06 bash[17497]: cluster 2026-03-10T12:46:17.952270+0000 mgr.vm06.cofomf (mgr.14193) 148 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 
2026-03-10T12:46:20.592 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:alertmanager.vm06 vm06 *:9093,9094 running (2m) 16s ago 3m 14.3M - 0.25.0 c8568f914cd2 d108de01b171 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:ceph-exporter.vm06 vm06 *:9926 running (3m) 16s ago 3m 8884k - 19.2.3-678-ge911bdeb 654f31e6858e d4f326ac6f19 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:ceph-exporter.vm09 vm09 *:9926 running (2m) 16s ago 2m 6276k - 19.2.3-678-ge911bdeb 654f31e6858e dbda4e85d017 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:crash.vm06 vm06 running (3m) 16s ago 3m 7300k - 19.2.3-678-ge911bdeb 654f31e6858e d63cc854a00b 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:crash.vm09 vm09 running (2m) 16s ago 2m 7304k - 19.2.3-678-ge911bdeb 654f31e6858e 67bd643ad13e 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:grafana.vm06 vm06 *:3000 running (2m) 16s ago 2m 63.3M - 10.4.0 c8b91775d855 27184972028c 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:mgr.vm06.cofomf vm06 *:9283,8765,8443 running (3m) 16s ago 3m 526M - 19.2.3-678-ge911bdeb 654f31e6858e 30170d412316 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:mgr.vm09.mcduck vm09 *:8443,9283,8765 running (2m) 16s ago 2m 464M - 19.2.3-678-ge911bdeb 654f31e6858e 5701589d930f 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:mon.vm06 vm06 running (3m) 16s ago 3m 47.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e f21fdbe2b119 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:mon.vm09 vm09 running (2m) 16s ago 2m 40.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e1bfd103d923 
2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:node-exporter.vm06 vm06 *:9100 running (3m) 16s ago 3m 7604k - 1.7.0 72c9c2088986 4593d067933a 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:node-exporter.vm09 vm09 *:9100 running (2m) 16s ago 2m 7587k - 1.7.0 72c9c2088986 806bcb363bb7 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.0 vm09 running (102s) 16s ago 104s 37.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8a04d22c3763 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.1 vm06 running (101s) 16s ago 103s 57.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c9313553e715 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.2 vm06 running (100s) 16s ago 102s 58.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6aa71f1ea13a 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.3 vm09 running (100s) 16s ago 102s 35.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 98b31acc45db 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.4 vm09 running (99s) 16s ago 101s 56.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7f0abb2a9dc8 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.5 vm06 running (98s) 16s ago 100s 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c9777115d415 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.6 vm09 running (98s) 16s ago 99s 34.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 627e24a2751a 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:osd.7 vm06 running (97s) 16s ago 98s 38.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c2a7280fb0de 2026-03-10T12:46:20.857 INFO:teuthology.orchestra.run.vm06.stdout:prometheus.vm06 vm06 *:9095 running (2m) 16s ago 2m 34.3M - 2.51.0 1d3b7f56885b b305ca4c61b2 2026-03-10T12:46:20.915 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k 
/etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch ls' 2026-03-10T12:46:21.210 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:21 vm06 bash[17497]: cluster 2026-03-10T12:46:19.952612+0000 mgr.vm06.cofomf (mgr.14193) 149 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:21.210 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:21 vm06 bash[17497]: cluster 2026-03-10T12:46:19.952612+0000 mgr.vm06.cofomf (mgr.14193) 149 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:21.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:21 vm09 bash[21409]: cluster 2026-03-10T12:46:19.952612+0000 mgr.vm06.cofomf (mgr.14193) 149 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:21.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:21 vm09 bash[21409]: cluster 2026-03-10T12:46:19.952612+0000 mgr.vm06.cofomf (mgr.14193) 149 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:22.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:22 vm06 bash[17497]: audit 2026-03-10T12:46:20.853131+0000 mgr.vm06.cofomf (mgr.14193) 150 : audit [DBG] from='client.14458 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:22.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:22 vm06 bash[17497]: audit 2026-03-10T12:46:20.853131+0000 mgr.vm06.cofomf (mgr.14193) 150 : audit [DBG] from='client.14458 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:22.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:22 vm09 bash[21409]: audit 2026-03-10T12:46:20.853131+0000 mgr.vm06.cofomf (mgr.14193) 150 : audit [DBG] from='client.14458 -' 
entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:22.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:22 vm09 bash[21409]: audit 2026-03-10T12:46:20.853131+0000 mgr.vm06.cofomf (mgr.14193) 150 : audit [DBG] from='client.14458 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:23.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:23 vm06 bash[17497]: cluster 2026-03-10T12:46:21.952844+0000 mgr.vm06.cofomf (mgr.14193) 151 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:23.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:23 vm06 bash[17497]: cluster 2026-03-10T12:46:21.952844+0000 mgr.vm06.cofomf (mgr.14193) 151 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:23.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:23 vm09 bash[21409]: cluster 2026-03-10T12:46:21.952844+0000 mgr.vm06.cofomf (mgr.14193) 151 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:23.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:23 vm09 bash[21409]: cluster 2026-03-10T12:46:21.952844+0000 mgr.vm06.cofomf (mgr.14193) 151 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:24.623 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config 2026-03-10T12:46:24.871 INFO:teuthology.orchestra.run.vm06.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-10T12:46:24.871 INFO:teuthology.orchestra.run.vm06.stdout:alertmanager ?:9093,9094 1/1 20s ago 3m count:1 2026-03-10T12:46:24.871 INFO:teuthology.orchestra.run.vm06.stdout:ceph-exporter ?:9926 2/2 20s ago 3m * 2026-03-10T12:46:24.871 
INFO:teuthology.orchestra.run.vm06.stdout:crash 2/2 20s ago 3m * 2026-03-10T12:46:24.871 INFO:teuthology.orchestra.run.vm06.stdout:grafana ?:3000 1/1 20s ago 3m count:1 2026-03-10T12:46:24.871 INFO:teuthology.orchestra.run.vm06.stdout:mgr 2/2 20s ago 3m count:2 2026-03-10T12:46:24.872 INFO:teuthology.orchestra.run.vm06.stdout:mon 2/2 20s ago 2m vm06:192.168.123.106=vm06;vm09:192.168.123.109=vm09;count:2 2026-03-10T12:46:24.872 INFO:teuthology.orchestra.run.vm06.stdout:node-exporter ?:9100 2/2 20s ago 3m * 2026-03-10T12:46:24.872 INFO:teuthology.orchestra.run.vm06.stdout:osd.all-available-devices 8 20s ago 2m * 2026-03-10T12:46:24.872 INFO:teuthology.orchestra.run.vm06.stdout:prometheus ?:9095 1/1 20s ago 3m count:1 2026-03-10T12:46:24.922 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch host ls' 2026-03-10T12:46:25.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:25 vm06 bash[17497]: cluster 2026-03-10T12:46:23.953099+0000 mgr.vm06.cofomf (mgr.14193) 152 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:25.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:25 vm06 bash[17497]: cluster 2026-03-10T12:46:23.953099+0000 mgr.vm06.cofomf (mgr.14193) 152 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:25.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:25 vm09 bash[21409]: cluster 2026-03-10T12:46:23.953099+0000 mgr.vm06.cofomf (mgr.14193) 152 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:25.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:25 vm09 bash[21409]: cluster 
2026-03-10T12:46:23.953099+0000 mgr.vm06.cofomf (mgr.14193) 152 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:26.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:26 vm06 bash[17497]: audit 2026-03-10T12:46:24.870113+0000 mgr.vm06.cofomf (mgr.14193) 153 : audit [DBG] from='client.14462 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:26.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:26 vm06 bash[17497]: audit 2026-03-10T12:46:24.870113+0000 mgr.vm06.cofomf (mgr.14193) 153 : audit [DBG] from='client.14462 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:26.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:26 vm09 bash[21409]: audit 2026-03-10T12:46:24.870113+0000 mgr.vm06.cofomf (mgr.14193) 153 : audit [DBG] from='client.14462 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:26.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:26 vm09 bash[21409]: audit 2026-03-10T12:46:24.870113+0000 mgr.vm06.cofomf (mgr.14193) 153 : audit [DBG] from='client.14462 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:46:27.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:27 vm06 bash[17497]: cluster 2026-03-10T12:46:25.953391+0000 mgr.vm06.cofomf (mgr.14193) 154 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:27.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:27 vm06 bash[17497]: cluster 2026-03-10T12:46:25.953391+0000 mgr.vm06.cofomf (mgr.14193) 154 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:46:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:27 vm09 bash[21409]: cluster 
2026-03-10T12:46:25.953391+0000 mgr.vm06.cofomf (mgr.14193) 154 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:27.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:27 vm09 bash[21409]: cluster 2026-03-10T12:46:25.953391+0000 mgr.vm06.cofomf (mgr.14193) 154 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:28.660 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config
2026-03-10T12:46:28.903 INFO:teuthology.orchestra.run.vm06.stdout:HOST  ADDR             LABELS  STATUS
2026-03-10T12:46:28.903 INFO:teuthology.orchestra.run.vm06.stdout:vm06  192.168.123.106
2026-03-10T12:46:28.903 INFO:teuthology.orchestra.run.vm06.stdout:vm09  192.168.123.109
2026-03-10T12:46:28.903 INFO:teuthology.orchestra.run.vm06.stdout:2 hosts in cluster
2026-03-10T12:46:28.955 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch device ls'
2026-03-10T12:46:29.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:29 vm06 bash[17497]: cluster 2026-03-10T12:46:27.953648+0000 mgr.vm06.cofomf (mgr.14193) 155 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:29.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:29 vm06 bash[17497]: cluster 2026-03-10T12:46:27.953648+0000 mgr.vm06.cofomf (mgr.14193) 155 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:29.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:29 vm09 bash[21409]: cluster 2026-03-10T12:46:27.953648+0000 mgr.vm06.cofomf (mgr.14193) 155 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:29.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:29 vm09 bash[21409]: cluster 2026-03-10T12:46:27.953648+0000 mgr.vm06.cofomf (mgr.14193) 155 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:30.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:30 vm09 bash[21409]: audit 2026-03-10T12:46:28.903663+0000 mgr.vm06.cofomf (mgr.14193) 156 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:30.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:30 vm09 bash[21409]: audit 2026-03-10T12:46:28.903663+0000 mgr.vm06.cofomf (mgr.14193) 156 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:30.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:30 vm06 bash[17497]: audit 2026-03-10T12:46:28.903663+0000 mgr.vm06.cofomf (mgr.14193) 156 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:30.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:30 vm06 bash[17497]: audit 2026-03-10T12:46:28.903663+0000 mgr.vm06.cofomf (mgr.14193) 156 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:31.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:31 vm06 bash[17497]: cluster 2026-03-10T12:46:29.953946+0000 mgr.vm06.cofomf (mgr.14193) 157 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:31.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:31 vm06 bash[17497]: cluster 2026-03-10T12:46:29.953946+0000 mgr.vm06.cofomf (mgr.14193) 157 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:31.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:31 vm09 bash[21409]: cluster 2026-03-10T12:46:29.953946+0000 mgr.vm06.cofomf (mgr.14193) 157 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:31.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:31 vm09 bash[21409]: cluster 2026-03-10T12:46:29.953946+0000 mgr.vm06.cofomf (mgr.14193) 157 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:32.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:32 vm06 bash[17497]: audit 2026-03-10T12:46:31.991790+0000 mon.vm06 (mon.0) 637 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T12:46:32.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:32 vm06 bash[17497]: audit 2026-03-10T12:46:31.991790+0000 mon.vm06 (mon.0) 637 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T12:46:32.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:32 vm09 bash[21409]: audit 2026-03-10T12:46:31.991790+0000 mon.vm06 (mon.0) 637 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T12:46:32.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:32 vm09 bash[21409]: audit 2026-03-10T12:46:31.991790+0000 mon.vm06 (mon.0) 637 : audit [DBG] from='mgr.14193 192.168.123.106:0/2199235980' entity='mgr.vm06.cofomf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T12:46:32.692 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:HOST  PATH      TYPE  DEVICE ID             SIZE  AVAILABLE  REFRESHED  REJECT REASONS
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm06  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k  No         93s ago    Has a FileSystem, Insufficient space (<5GB)
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm06  /dev/vdb  hdd   DWNBRSTVMM06001       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm06  /dev/vdc  hdd   DWNBRSTVMM06002       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm06  /dev/vdd  hdd   DWNBRSTVMM06003       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm06  /dev/vde  hdd   DWNBRSTVMM06004       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm09  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k  No         93s ago    Has a FileSystem, Insufficient space (<5GB)
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm09  /dev/vdb  hdd   DWNBRSTVMM09001       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm09  /dev/vdc  hdd   DWNBRSTVMM09002       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm09  /dev/vdd  hdd   DWNBRSTVMM09003       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.940 INFO:teuthology.orchestra.run.vm06.stdout:vm09  /dev/vde  hdd   DWNBRSTVMM09004       20.0G  No        93s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T12:46:32.994 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- bash -c 'ceph orch ls | grep '"'"'^osd.all-available-devices '"'"''
2026-03-10T12:46:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:33 vm06 bash[17497]: cluster 2026-03-10T12:46:31.954235+0000 mgr.vm06.cofomf (mgr.14193) 158 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:33.347 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:33 vm06 bash[17497]: cluster 2026-03-10T12:46:31.954235+0000 mgr.vm06.cofomf (mgr.14193) 158 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:33.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:33 vm09 bash[21409]: cluster 2026-03-10T12:46:31.954235+0000 mgr.vm06.cofomf (mgr.14193) 158 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:33.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:33 vm09 bash[21409]: cluster 2026-03-10T12:46:31.954235+0000 mgr.vm06.cofomf (mgr.14193) 158 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:34.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:34 vm06 bash[17497]: audit 2026-03-10T12:46:32.940022+0000 mgr.vm06.cofomf (mgr.14193) 159 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:34.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:34 vm06 bash[17497]: audit 2026-03-10T12:46:32.940022+0000 mgr.vm06.cofomf (mgr.14193) 159 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:34.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:34 vm09 bash[21409]: audit 2026-03-10T12:46:32.940022+0000 mgr.vm06.cofomf (mgr.14193) 159 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:34.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:34 vm09 bash[21409]: audit 2026-03-10T12:46:32.940022+0000 mgr.vm06.cofomf (mgr.14193) 159 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:35.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:35 vm09 bash[21409]: cluster 2026-03-10T12:46:33.954594+0000 mgr.vm06.cofomf (mgr.14193) 160 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:35.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:35 vm09 bash[21409]: cluster 2026-03-10T12:46:33.954594+0000 mgr.vm06.cofomf (mgr.14193) 160 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:35.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:35 vm06 bash[17497]: cluster 2026-03-10T12:46:33.954594+0000 mgr.vm06.cofomf (mgr.14193) 160 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:35.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:35 vm06 bash[17497]: cluster 2026-03-10T12:46:33.954594+0000 mgr.vm06.cofomf (mgr.14193) 160 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:36.729 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config
2026-03-10T12:46:37.009 INFO:teuthology.orchestra.run.vm06.stdout:osd.all-available-devices  8  33s ago  2m  *
2026-03-10T12:46:37.053 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T12:46:37.055 INFO:tasks.cephadm:Teardown begin
2026-03-10T12:46:37.055 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T12:46:37.062 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T12:46:37.071 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T12:46:37.071 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 -- ceph mgr module disable cephadm
2026-03-10T12:46:37.302 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:37 vm06 bash[17497]: cluster 2026-03-10T12:46:35.954931+0000 mgr.vm06.cofomf (mgr.14193) 161 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:37.302 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:37 vm06 bash[17497]: cluster 2026-03-10T12:46:35.954931+0000 mgr.vm06.cofomf (mgr.14193) 161 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:37 vm09 bash[21409]: cluster 2026-03-10T12:46:35.954931+0000 mgr.vm06.cofomf (mgr.14193) 161 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:37.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:37 vm09 bash[21409]: cluster 2026-03-10T12:46:35.954931+0000 mgr.vm06.cofomf (mgr.14193) 161 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:38.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:38 vm06 bash[17497]: audit 2026-03-10T12:46:36.998239+0000 mgr.vm06.cofomf (mgr.14193) 162 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:38.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:38 vm06 bash[17497]: audit 2026-03-10T12:46:36.998239+0000 mgr.vm06.cofomf (mgr.14193) 162 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:38.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:38 vm09 bash[21409]: audit 2026-03-10T12:46:36.998239+0000 mgr.vm06.cofomf (mgr.14193) 162 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:38.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:38 vm09 bash[21409]: audit 2026-03-10T12:46:36.998239+0000 mgr.vm06.cofomf (mgr.14193) 162 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:46:39.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:39 vm06 bash[17497]: cluster 2026-03-10T12:46:37.955214+0000 mgr.vm06.cofomf (mgr.14193) 163 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:39.597 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:39 vm06 bash[17497]: cluster 2026-03-10T12:46:37.955214+0000 mgr.vm06.cofomf (mgr.14193) 163 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:39.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:39 vm09 bash[21409]: cluster 2026-03-10T12:46:37.955214+0000 mgr.vm06.cofomf (mgr.14193) 163 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:39.609 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:46:39 vm09 bash[21409]: cluster 2026-03-10T12:46:37.955214+0000 mgr.vm06.cofomf (mgr.14193) 163 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T12:46:40.763 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/mon.vm06/config
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-10T12:46:40.905+0000 7f085f667640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-10T12:46:40.905+0000 7f085f667640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-10T12:46:40.905+0000 7f085f667640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-10T12:46:40.905+0000 7f085f667640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-10T12:46:40.905+0000 7f085f667640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-10T12:46:40.905+0000 7f085f667640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-10T12:46:40.905+0000 7f085f667640 -1 monclient: keyring not found
2026-03-10T12:46:40.909 INFO:teuthology.orchestra.run.vm06.stderr:[errno 21] error connecting to the cluster
2026-03-10T12:46:40.948 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:46:40.948 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T12:46:40.948 DEBUG:teuthology.orchestra.run.vm06:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T12:46:40.951 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T12:46:40.953 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T12:46:40.954 INFO:tasks.cephadm.mon.vm06:Stopping mon.vm06...
2026-03-10T12:46:40.954 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06
2026-03-10T12:46:41.036 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:41 vm06 systemd[1]: Stopping Ceph mon.vm06 for 68e2be40-1c7e-11f1-b779-df2955349a39...
2026-03-10T12:46:41.201 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:41 vm06 bash[17497]: debug 2026-03-10T12:46:41.033+0000 7f33db6b4640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm06 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T12:46:41.201 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:41 vm06 bash[17497]: debug 2026-03-10T12:46:41.033+0000 7f33db6b4640 -1 mon.vm06@0(leader) e2 *** Got Signal Terminated ***
2026-03-10T12:46:41.297 INFO:journalctl@ceph.mon.vm06.vm06.stdout:Mar 10 12:46:41 vm06 bash[47380]: ceph-68e2be40-1c7e-11f1-b779-df2955349a39-mon-vm06
2026-03-10T12:46:41.299 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm06.service'
2026-03-10T12:46:41.370 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T12:46:41.370 INFO:tasks.cephadm.mon.vm06:Stopped mon.vm06
2026-03-10T12:46:41.370 INFO:tasks.cephadm.mon.vm09:Stopping mon.vm09...
2026-03-10T12:46:41.370 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm09
2026-03-10T12:46:41.516 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-68e2be40-1c7e-11f1-b779-df2955349a39@mon.vm09.service'
2026-03-10T12:46:41.536 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T12:46:41.536 INFO:tasks.cephadm.mon.vm09:Stopped mon.vm09
2026-03-10T12:46:41.537 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 --force --keep-logs
2026-03-10T12:46:41.624 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:47:11.428 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 --force --keep-logs
2026-03-10T12:47:11.513 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:47:40.502 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T12:47:40.509 INFO:teuthology.orchestra.run.vm06.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-10T12:47:40.509 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:47:40.509 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T12:47:40.517 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T12:47:40.517 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033/remote/vm06/crash
2026-03-10T12:47:40.517 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/crash -- .
2026-03-10T12:47:40.557 INFO:teuthology.orchestra.run.vm06.stderr:tar: /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/crash: Cannot open: No such file or directory
2026-03-10T12:47:40.557 INFO:teuthology.orchestra.run.vm06.stderr:tar: Error is not recoverable: exiting now
2026-03-10T12:47:40.558 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033/remote/vm09/crash
2026-03-10T12:47:40.558 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/crash -- .
2026-03-10T12:47:40.567 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/crash: Cannot open: No such file or directory
2026-03-10T12:47:40.567 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now
2026-03-10T12:47:40.567 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T12:47:40.567 DEBUG:teuthology.orchestra.run.vm06:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_DAEMON_PLACE_FAIL | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
2026-03-10T12:47:40.609 INFO:tasks.cephadm:Compressing logs...
2026-03-10T12:47:40.609 DEBUG:teuthology.orchestra.run.vm06:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T12:47:40.651 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T12:47:40.658 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T12:47:40.658 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T12:47:40.658 INFO:teuthology.orchestra.run.vm06.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T12:47:40.659 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T12:47:40.659 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.3.log
2026-03-10T12:47:40.659 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-client.ceph-exporter.vm06.log
2026-03-10T12:47:40.659 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.log
2026-03-10T12:47:40.659 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.log
2026-03-10T12:47:40.660 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-client.ceph-exporter.vm06.log: 93.7% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-client.ceph-exporter.vm06.log.gz
2026-03-10T12:47:40.660 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.1.log
2026-03-10T12:47:40.660 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.3.log: 89.5% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T12:47:40.660 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mgr.vm09.mcduck.log
2026-03-10T12:47:40.661 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.log: 86.3% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.log.gz
2026-03-10T12:47:40.661 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.log: 86.2% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.log.gz
2026-03-10T12:47:40.661 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.6.log
2026-03-10T12:47:40.662 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.5.log
2026-03-10T12:47:40.664 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mgr.vm09.mcduck.log: 91.5% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mgr.vm09.mcduck.log.gz
2026-03-10T12:47:40.664 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.audit.log
2026-03-10T12:47:40.671 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.7.log
2026-03-10T12:47:40.672 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.5.log: 91.6% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T12:47:40.672 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-volume.log
2026-03-10T12:47:40.673 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.audit.log: 90.8% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.audit.log.gz
2026-03-10T12:47:40.678 INFO:teuthology.orchestra.run.vm09.stderr: 93.0% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.3.log.gz
2026-03-10T12:47:40.679 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mgr.vm06.cofomf.log
2026-03-10T12:47:40.679 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mon.vm09.log
2026-03-10T12:47:40.682 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.7.log: 92.9% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.1.log.gz
2026-03-10T12:47:40.683 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.2.log
2026-03-10T12:47:40.685 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-client.ceph-exporter.vm09.log
2026-03-10T12:47:40.686 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mon.vm09.log: 96.2% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-volume.log.gz
2026-03-10T12:47:40.686 INFO:teuthology.orchestra.run.vm09.stderr: 93.1% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.6.log.gz
2026-03-10T12:47:40.687 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.cephadm.log
2026-03-10T12:47:40.687 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-client.ceph-exporter.vm09.log: 29.4% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-client.ceph-exporter.vm09.log.gz
2026-03-10T12:47:40.687 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.4.log
2026-03-10T12:47:40.688 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.cephadm.log: 82.0% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.cephadm.log.gz
2026-03-10T12:47:40.688 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.0.log
2026-03-10T12:47:40.690 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mgr.vm06.cofomf.log: gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.audit.log
2026-03-10T12:47:40.699 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-volume.log
2026-03-10T12:47:40.701 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.audit.log: 90.7% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.audit.log.gz
2026-03-10T12:47:40.702 INFO:teuthology.orchestra.run.vm06.stderr: 93.1% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.5.log.gz
2026-03-10T12:47:40.711 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.cephadm.log
2026-03-10T12:47:40.711 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.4.log: /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.0.log: 92.6% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mon.vm09.log.gz
2026-03-10T12:47:40.714 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mon.vm06.log
2026-03-10T12:47:40.715 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.cephadm.log: 83.1% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph.cephadm.log.gz
2026-03-10T12:47:40.719 INFO:teuthology.orchestra.run.vm09.stderr: 93.1% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.4.log.gz
2026-03-10T12:47:40.722 INFO:teuthology.orchestra.run.vm09.stderr: 92.9% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.0.log.gz
2026-03-10T12:47:40.723 INFO:teuthology.orchestra.run.vm06.stderr: 93.0% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.2.log.gz
2026-03-10T12:47:40.723 INFO:teuthology.orchestra.run.vm09.stderr:
2026-03-10T12:47:40.723 INFO:teuthology.orchestra.run.vm09.stderr:real 0m0.069s
2026-03-10T12:47:40.723 INFO:teuthology.orchestra.run.vm09.stderr:user 0m0.124s
2026-03-10T12:47:40.723 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.007s
2026-03-10T12:47:40.730 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mon.vm06.log: 96.3% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-volume.log.gz
2026-03-10T12:47:40.732 INFO:teuthology.orchestra.run.vm06.stderr: 93.2% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-osd.7.log.gz
2026-03-10T12:47:40.756 INFO:teuthology.orchestra.run.vm06.stderr: 90.5% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mgr.vm06.cofomf.log.gz
2026-03-10T12:47:40.818 INFO:teuthology.orchestra.run.vm06.stderr: 91.4% -- replaced with /var/log/ceph/68e2be40-1c7e-11f1-b779-df2955349a39/ceph-mon.vm06.log.gz
2026-03-10T12:47:40.819 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-10T12:47:40.819 INFO:teuthology.orchestra.run.vm06.stderr:real 0m0.166s
2026-03-10T12:47:40.819 INFO:teuthology.orchestra.run.vm06.stderr:user 0m0.241s
2026-03-10T12:47:40.819 INFO:teuthology.orchestra.run.vm06.stderr:sys 0m0.020s
2026-03-10T12:47:40.819 INFO:tasks.cephadm:Archiving logs...
2026-03-10T12:47:40.819 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033/remote/vm06/log
2026-03-10T12:47:40.819 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T12:47:40.884 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033/remote/vm09/log
2026-03-10T12:47:40.884 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T12:47:40.900 INFO:tasks.cephadm:Removing cluster...
2026-03-10T12:47:40.900 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 --force
2026-03-10T12:47:41.011 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:47:42.077 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 68e2be40-1c7e-11f1-b779-df2955349a39 --force
2026-03-10T12:47:42.168 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 68e2be40-1c7e-11f1-b779-df2955349a39
2026-03-10T12:47:43.234 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T12:47:43.235 DEBUG:teuthology.orchestra.run.vm06:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T12:47:43.238 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T12:47:43.241 INFO:tasks.cephadm:Teardown complete
2026-03-10T12:47:43.241 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T12:47:43.243 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T12:47:43.243 DEBUG:teuthology.orchestra.run.vm06:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T12:47:43.280 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:==============================================================================
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+static.179.181. 161.62.157.173 3 u 55 64 77 23.650 +0.235 6.902
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+47.ip-51-75-67. 225.254.30.190 4 u 54 64 77 21.260 +1.004 6.628
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+185.252.140.126 218.73.139.35 2 u 59 64 77 25.234 -0.073 5.939
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+vps-fra2.orlean 169.254.169.254 4 u 53 64 77 20.983 +0.004 4.967
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:*ntp0.rrze.uni-e .GPS. 1 u 54 64 77 26.238 -1.162 8.217
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+185.232.69.65 ( .PHC0. 1 u 53 64 77 28.248 -2.326 5.193
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+basilisk.mybb.d 171.237.1.87 2 u 56 64 77 28.302 -2.522 4.606
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+ntp2.kernfusion 192.53.103.108 2 u 51 64 77 32.700 -0.914 4.479
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+static.46.170.2 188.40.142.18 3 u 48 64 77 25.094 -0.004 5.560
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+zeus.f5s.de 192.53.103.103 2 u 53 64 77 24.997 +0.030 5.285
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+mx03.fischl-onl 122.227.206.195 3 u 51 64 77 25.055 +0.219 5.228
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+alphyn.canonica 132.163.96.1 2 u 58 64 77 98.676 -0.092 5.188
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+ns.gunnarhofman 237.17.204.95 2 u 47 64 77 24.886 +0.087 4.541
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+ntp2.uni-ulm.de 129.69.253.1 2 u 48 64 77 27.604 -0.883 4.921
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+185.125.190.56 79.243.60.50 2 u 57 64 77 35.322 -1.429 4.718
2026-03-10T12:47:43.449 INFO:teuthology.orchestra.run.vm06.stdout:+139-162-187-236 82.43.52.28 2 u 47 64 77 22.630 -5.312 4.406
2026-03-10T12:47:43.450 INFO:teuthology.orchestra.run.vm06.stdout:+185.125.190.58 145.238.80.80 2 u 63 64 77 35.421 -2.002 4.992
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:==============================================================================
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+mx03.fischl-onl 122.227.206.195 3 u 49 64 77 24.993 +0.428 2.896
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+zeus.f5s.de 192.53.103.103 2 u 50 64 77 25.025 +4.762 3.678
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+ntp2.kernfusion 192.53.103.108 2 u 50 64 77 31.519 +0.523 2.142
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+vps-fra2.orlean 169.254.169.254 4 u 51 64 77 20.959 +0.610 3.058
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+static.46.170.2 188.40.142.18 3 u 49 64 77 24.971 +0.253 2.856
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:*ntp0.rrze.uni-e .GPS. 1 u 51 64 77 26.219 -0.589 3.194
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+47.ip-51-75-67. 225.254.30.190 4 u 52 64 77 21.185 +1.570 2.717
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:#139-162-187-236 82.43.52.28 2 u 49 64 77 22.783 -1.471 3.105
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+basilisk.mybb.d 171.237.1.87 2 u 47 64 77 28.324 -1.876 1.075
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:#185.125.190.56 79.243.60.50 2 u 59 64 77 33.283 +1.481 1.733
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+ns.gunnarhofman 237.17.204.95 2 u 52 64 77 24.968 +0.537 2.414
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:+ntp2.uni-ulm.de 129.69.253.1 2 u 55 64 77 27.398 -0.409 2.404
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:#185.125.190.57 194.121.207.249 2 u 62 64 77 32.018 +0.910 2.102
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:#185.232.69.65 ( .PHC0. 1 u 51 64 77 28.340 -1.762 2.563
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:#alphyn.canonica 132.163.96.1 2 u 58 64 77 102.153 -2.501 2.145
2026-03-10T12:47:43.452 INFO:teuthology.orchestra.run.vm09.stdout:#185.125.190.58 145.238.80.80 2 u 59 64 77 35.312 -1.368 2.391
2026-03-10T12:47:43.453 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T12:47:43.455 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T12:47:43.455 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T12:47:43.457 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T12:47:43.459 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T12:47:43.461 INFO:teuthology.task.internal:Duration was 534.955458 seconds
2026-03-10T12:47:43.461 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T12:47:43.463 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
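The clock-skew check shown above uses a simple fallback chain: query `ntpq` first, fall back to `chronyc` on hosts that run chrony instead of ntpd, and never fail the teardown if neither tool answers. A minimal sketch of that pattern, with the function name `check_clock` being a hypothetical wrapper (teuthology issues the one-liner directly):

```shell
# Sketch of the clock-check fallback from the log. The explicit PATH mirrors
# the logged command, which pins the lookup to system binary directories.
check_clock() {
  PATH=/usr/bin:/usr/sbin ntpq -p ||
    PATH=/usr/bin:/usr/sbin chronyc sources ||
    true   # trailing `|| true` keeps teardown green even with no NTP client
}
check_clock
echo "clock check finished"
```

Because the whole chain ends in `|| true`, the check is purely informational: the peer tables are captured for later skew diagnosis, but a missing daemon cannot turn a passing job red.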
2026-03-10T12:47:43.463 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T12:47:43.464 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T12:47:43.492 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T12:47:43.492 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm06.local
2026-03-10T12:47:43.492 DEBUG:teuthology.orchestra.run.vm06:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T12:47:43.546 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-10T12:47:43.546 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T12:47:43.556 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T12:47:43.557 DEBUG:teuthology.orchestra.run.vm06:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T12:47:43.591 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T12:47:43.638 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T12:47:43.638 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T12:47:43.638 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T12:47:43.644 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T12:47:43.645 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T12:47:43.645 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- 0.0% /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T12:47:43.645 INFO:teuthology.orchestra.run.vm06.stderr: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T12:47:43.645 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T12:47:43.656 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 89.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T12:47:43.683 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T12:47:43.683 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T12:47:43.683 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T12:47:43.683 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T12:47:43.684 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz/home/ubuntu/cephtest/archive/syslog/journalctl.log:
2026-03-10T12:47:43.690 INFO:teuthology.orchestra.run.vm09.stderr: 89.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T12:47:43.692 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T12:47:43.694 INFO:teuthology.task.internal:Restoring /etc/sudoers...
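The kern.log scan above is one long pipeline: a broad `grep -E` flags any line containing BUG, INFO, or DEADLOCK, a chain of `grep -v` stages whitelists known-benign messages, and `head -n 1` keeps only the first surviving hit (empty output means the kernel log is clean). A condensed sketch with just two of the logged whitelist stages, against a hypothetical sample file:

```shell
# Condensed sketch of the syslog error scan; /tmp/demo-kern.log and its
# contents are fabricated for illustration. The real chain has ~18 filters.
KERN=/tmp/demo-kern.log
cat > "$KERN" <<'EOF'
kernel: something ordinary happened
kernel: INFO: NMI handler (perf_event_nmi_handler) took too long to run
EOF

# Broad match, then whitelist benign patterns, then report only the first hit.
grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' "$KERN" |
  grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' |
  grep -v CRON |
  head -n 1
```

Here the lone INFO line is whitelisted, so the pipeline prints nothing and the host passes the check; in the run above both vm06 and vm09 came back clean the same way.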
2026-03-10T12:47:43.694 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T12:47:43.706 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T12:47:43.743 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T12:47:43.745 DEBUG:teuthology.orchestra.run.vm06:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:47:43.747 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:47:43.753 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = core
2026-03-10T12:47:43.791 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-10T12:47:43.799 DEBUG:teuthology.orchestra.run.vm06:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:47:43.804 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:47:43.805 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:47:43.842 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:47:43.843 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T12:47:43.845 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T12:47:43.845 DEBUG:teuthology.misc:Transferring archived files from vm06:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033/remote/vm06
2026-03-10T12:47:43.845 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T12:47:43.854 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1033/remote/vm09
2026-03-10T12:47:43.854 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T12:47:43.891 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T12:47:43.892 DEBUG:teuthology.orchestra.run.vm06:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T12:47:43.899 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T12:47:43.939 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T12:47:43.941 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T12:47:43.941 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T12:47:43.944 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T12:47:43.944 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T12:47:43.945 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T12:47:43.947 INFO:teuthology.orchestra.run.vm06.stdout: 258067 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 12:47 /home/ubuntu/cephtest
2026-03-10T12:47:43.983 INFO:teuthology.orchestra.run.vm09.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 12:47 /home/ubuntu/cephtest
2026-03-10T12:47:43.983 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T12:47:43.989 INFO:teuthology.run:Summary data: description: orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 1-start 2-services/basic 3-final} duration: 534.9554579257965 owner: kyr success: true
2026-03-10T12:47:43.989 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T12:47:44.007 INFO:teuthology.run:pass