2026-03-10T07:13:11.257 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T07:13:11.262 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T07:13:11.282 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944
branch: squid
description: orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 1-start 2-services/nfs-ingress 3-final}
email: null
first_in_suite: false
flavor: default
job_id: '944'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_DAEMON_PLACE_FAIL
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
- - host.b
  - client.1
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNCt6ix73YvFUpdBxeiZghPqhx8Qzn6eMI5CE4M8rGS5m7Sh32/tapJrdNopDU6YeY1ag+lCaBrNZRQE3zEmy5I=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOxrIc3EdR1yNdpewwtb4mOuhMfOck3gVDoFjNcgbaDA4fkNVqzgjZQpigxIRE6ze/63eD31pSpaGIW/j2rtUF8=
tasks:
- cephadm:
    roleless: true
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- vip: null
- cephadm.shell:
    host.a:
    - ceph orch device ls --refresh
- vip.exec:
    all-hosts:
    - systemctl stop nfs-server
- cephadm.shell:
    host.a:
    - ceph fs volume create foofs
- cephadm.apply:
    specs:
    - placement:
        count: 2
      service_id: foo
      service_type: nfs
      spec:
        port: 12049
    - service_id: nfs.foo
      service_type: ingress
      spec:
        backend_service: nfs.foo
        frontend_port: 2049
        monitor_port: 9002
        virtual_ip: '{{VIP0}}/{{VIPPREFIXLEN}}'
- cephadm.wait_for_service:
    service: nfs.foo
- cephadm.wait_for_service:
    service: ingress.nfs.foo
- cephadm.shell:
    host.a:
    - ceph nfs export create cephfs --fsname foofs --cluster-id foo --pseudo-path /fake
- vip.exec:
    host.a:
    - mkdir /mnt/foo
    - sleep 5
    - mount -t nfs {{VIP0}}:/fake /mnt/foo
    - echo test > /mnt/foo/testfile
    - sync
- cephadm.shell:
    host.a:
    - "echo \"Check with each haproxy down in turn...\"\nfor haproxy in `ceph orch ps | grep ^haproxy.nfs.foo. | awk '{print $1}'`; do\n  ceph orch daemon stop $haproxy\n  while ! ceph orch ps | grep $haproxy | grep stopped; do sleep 1 ; done\n  cat /mnt/foo/testfile\n  echo $haproxy > /mnt/foo/testfile\n  sync\n  ceph orch daemon start $haproxy\n  while ! ceph orch ps | grep $haproxy | grep running; do sleep 1 ; done\ndone\n"
    volumes:
    - /mnt/foo:/mnt/foo
- cephadm.shell:
    host.a:
    - stat -c '%u %g' /var/log/ceph | grep '167 167'
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
    - ceph orch ls | grep '^osd.all-available-devices '
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T07:13:11.282 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T07:13:11.283 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T07:13:11.283 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T07:13:11.283 INFO:teuthology.task.internal:Checking packages...
2026-03-10T07:13:11.283 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T07:13:11.283 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T07:13:11.283 INFO:teuthology.packaging:ref: None
2026-03-10T07:13:11.283 INFO:teuthology.packaging:tag: None
2026-03-10T07:13:11.283 INFO:teuthology.packaging:branch: squid
2026-03-10T07:13:11.283 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:13:11.283 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T07:13:12.006 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T07:13:12.007 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T07:13:12.008 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T07:13:12.008 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T07:13:12.008 INFO:teuthology.task.internal:Saving configuration
2026-03-10T07:13:12.013 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T07:13:12.014 INFO:teuthology.task.internal.check_lock:Checking locks...
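The failover check in the last cephadm.shell task above is stored as an escaped YAML string. Unescaped into a plain script it reads as below; the quoting around $haproxy and the anchored, escaped daemon-name pattern are defensive touches added in this sketch, not teuthology's verbatim text:

    #!/bin/bash
    # Stop each haproxy.nfs.foo daemon in turn and verify the NFS mount
    # stays readable and writable through the remaining ingress daemon.
    set -ex
    echo "Check with each haproxy down in turn..."
    for haproxy in $(ceph orch ps | grep '^haproxy\.nfs\.foo\.' | awk '{print $1}'); do
      ceph orch daemon stop "$haproxy"
      while ! ceph orch ps | grep "$haproxy" | grep stopped; do sleep 1; done
      cat /mnt/foo/testfile              # read through the surviving haproxy
      echo "$haproxy" > /mnt/foo/testfile
      sync
      ceph orch daemon start "$haproxy"
      while ! ceph orch ps | grep "$haproxy" | grep running; do sleep 1; done
    done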
2026-03-10T07:13:12.021 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 07:11:56.154918', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNCt6ix73YvFUpdBxeiZghPqhx8Qzn6eMI5CE4M8rGS5m7Sh32/tapJrdNopDU6YeY1ag+lCaBrNZRQE3zEmy5I='}
2026-03-10T07:13:12.026 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 07:11:56.154472', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOxrIc3EdR1yNdpewwtb4mOuhMfOck3gVDoFjNcgbaDA4fkNVqzgjZQpigxIRE6ze/63eD31pSpaGIW/j2rtUF8='}
2026-03-10T07:13:12.026 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T07:13:12.027 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['host.a', 'client.0']
2026-03-10T07:13:12.027 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['host.b', 'client.1']
2026-03-10T07:13:12.027 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T07:13:12.033 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-10T07:13:12.038 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-10T07:13:12.038 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fe692372290>, signals=[15])
2026-03-10T07:13:12.038 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T07:13:12.039 INFO:teuthology.task.internal:Opening connections...
2026-03-10T07:13:12.039 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-10T07:13:12.040 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:13:12.100 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-10T07:13:12.100 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:13:12.162 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T07:13:12.163 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-10T07:13:12.170 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-10T07:13:12.170 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:NAME="Ubuntu"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="22.04"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_CODENAME=jammy
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:ID=ubuntu
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE=debian
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T07:13:12.213 INFO:teuthology.orchestra.run.vm05.stdout:UBUNTU_CODENAME=jammy
2026-03-10T07:13:12.214 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-10T07:13:12.219 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-10T07:13:12.228 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-10T07:13:12.228 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:NAME="Ubuntu"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="22.04"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_CODENAME=jammy
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:ID=ubuntu
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE=debian
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T07:13:12.272 INFO:teuthology.orchestra.run.vm09.stdout:UBUNTU_CODENAME=jammy
2026-03-10T07:13:12.272 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-10T07:13:12.277 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T07:13:12.278 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T07:13:12.279 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T07:13:12.279 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-10T07:13:12.280 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-10T07:13:12.315 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T07:13:12.316 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T07:13:12.316 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-10T07:13:12.323 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-10T07:13:12.325 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T07:13:12.360 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T07:13:12.360 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T07:13:12.368 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-10T07:13:12.370 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:13:12.638 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-10T07:13:12.641 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:13:12.867 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T07:13:12.869 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T07:13:12.869 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T07:13:12.870 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T07:13:12.872 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T07:13:12.873 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T07:13:12.875 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T07:13:12.875 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T07:13:12.915 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T07:13:12.920 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T07:13:12.921 INFO:teuthology.task.internal:Enabling coredump saving...
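The non-empty check above succeeds here only because /var/lib/ceph does not exist, so the unquoted $(ls -A /var/lib/ceph) expands to nothing. With several entries present the substitution would expand to multiple words and break test. A more defensive variant of the same check (a sketch, not what teuthology runs):

    # Fail if /var/lib/ceph exists and is non-empty; a missing dir counts as clean.
    if [ -d /var/lib/ceph ] && [ -n "$(ls -A /var/lib/ceph)" ]; then
      echo "/var/lib/ceph is not empty" >&2
      exit 1
    fi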
2026-03-10T07:13:12.921 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T07:13:12.962 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:13:12.962 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T07:13:12.964 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:13:12.964 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T07:13:13.004 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T07:13:13.011 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:13:13.016 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:13:13.017 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:13:13.021 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T07:13:13.022 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T07:13:13.047 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T07:13:13.047 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T07:13:13.059 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T07:13:13.071 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T07:13:13.074 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
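The coredump command above boils down to a reusable pattern: point kernel.core_pattern at a collection directory, then persist the setting. A standalone sketch of the same steps, with ARCHIVE standing in for /home/ubuntu/cephtest/archive:

    ARCHIVE=/home/ubuntu/cephtest/archive
    install -d -m0755 -- "$ARCHIVE/coredump"
    # %t = epoch time of the dump, %p = PID of the crashing process
    sudo sysctl -w kernel.core_pattern="$ARCHIVE/coredump/%t.%p.core"
    # persist the setting across reboots
    echo "kernel.core_pattern=$ARCHIVE/coredump/%t.%p.core" | sudo tee -a /etc/sysctl.conf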
2026-03-10T07:13:13.074 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T07:13:13.107 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T07:13:13.115 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T07:13:13.153 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T07:13:13.197 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:13:13.197 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T07:13:13.249 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T07:13:13.262 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T07:13:13.308 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T07:13:13.308 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T07:13:13.356 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-10T07:13:13.358 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-10T07:13:13.412 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T07:13:13.419 INFO:teuthology.task.internal:Starting timer...
2026-03-10T07:13:13.420 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T07:13:13.436 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T07:13:13.490 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-10T07:13:13.490 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-10T07:13:13.490 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T07:13:13.490 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T07:13:13.490 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T07:13:13.490 INFO:teuthology.run_tasks:Running task ansible.cephlab...
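The dd invocations above write the rsyslog drop-in from stdin, so its contents are never echoed in the log. A minimal conf that would produce the kern.log/misc.log split the task prepares (an illustrative guess, not the exact file teuthology writes):

    # /etc/rsyslog.d/80-cephtest.conf (illustrative)
    # kernel messages to kern.log; everything else to misc.log
    # leading '-' means async writes (no fsync per message)
    kern.* -/home/ubuntu/cephtest/archive/syslog/kern.log
    *.*;kern.none -/home/ubuntu/cephtest/archive/syslog/misc.log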
2026-03-10T07:13:13.530 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T07:13:13.531 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T07:13:13.586 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T07:13:14.369 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T07:13:14.416 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T07:13:14.416 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryrga6kyzp --limit vm05.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T07:15:30.357 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm05.local'), Remote(name='ubuntu@vm09.local')]
2026-03-10T07:15:30.357 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-10T07:15:30.358 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:15:30.419 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-10T07:15:30.641 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-10T07:15:30.641 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-10T07:15:30.641 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T07:15:30.702 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-10T07:15:30.924 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-10T07:15:30.924 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T07:15:30.927 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
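The clock task issues a single one-liner to both nodes (logged below) that works regardless of which time daemon the image ships. Broken out with comments, the same sequence is:

    # stop whichever time service is present (first match wins)
    sudo systemctl stop ntp.service \
      || sudo systemctl stop ntpd.service \
      || sudo systemctl stop chronyd.service
    # force an immediate time step/slew while the daemon is down
    sudo ntpd -gq || sudo chronyc makestep
    # bring the service back up
    sudo systemctl start ntp.service \
      || sudo systemctl start ntpd.service \
      || sudo systemctl start chronyd.service
    # report peer status for the log, tolerating either toolchain
    PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true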
2026-03-10T07:15:30.927 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T07:15:30.927 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T07:15:30.928 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T07:15:30.928 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Command line: ntpd -gq
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: ----------------------------------------------------
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: corporation.  Support and training for ntp-4 are
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: available at https://www.nwtime.org/support
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: ----------------------------------------------------
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: proto: precision = 0.029 usec (-25)
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: basedate set to 2022-02-04
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: gps base set to 2022-02-06 (week 2196)
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Listen normally on 3 ens3 192.168.123.105:123
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Listen normally on 4 lo [::1]:123
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:5%2]:123
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:30 ntpd[16131]: Listening on routing socket on fd #22 for interface updates
2026-03-10T07:15:30.946 INFO:teuthology.orchestra.run.vm05.stderr:10 Mar 07:15:30 ntpd[16131]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Command line: ntpd -gq
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: ----------------------------------------------------
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: corporation.  Support and training for ntp-4 are
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: available at https://www.nwtime.org/support
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: ----------------------------------------------------
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: proto: precision = 0.040 usec (-24)
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: basedate set to 2022-02-04
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: gps base set to 2022-02-06 (week 2196)
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T07:15:30.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T07:15:30.983 INFO:teuthology.orchestra.run.vm09.stderr:10 Mar 07:15:30 ntpd[16101]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T07:15:30.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T07:15:30.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T07:15:30.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T07:15:30.984 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Listen normally on 3 ens3 192.168.123.109:123
2026-03-10T07:15:30.984 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Listen normally on 4 lo [::1]:123
2026-03-10T07:15:30.984 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:9%2]:123
2026-03-10T07:15:30.984 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:30 ntpd[16101]: Listening on routing socket on fd #22 for interface updates
2026-03-10T07:15:31.945 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:31 ntpd[16131]: Soliciting pool server 144.76.59.106
2026-03-10T07:15:31.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:31 ntpd[16101]: Soliciting pool server 144.76.59.106
2026-03-10T07:15:32.944 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:32 ntpd[16131]: Soliciting pool server 77.90.0.148
2026-03-10T07:15:32.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:32 ntpd[16101]: Soliciting pool server 77.90.0.148
2026-03-10T07:15:33.072 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:33 ntpd[16101]: Soliciting pool server 162.159.200.1
2026-03-10T07:15:33.072 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:33 ntpd[16131]: Soliciting pool server 162.159.200.1
2026-03-10T07:15:33.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:33 ntpd[16131]: Soliciting pool server 213.172.105.106
2026-03-10T07:15:33.944 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:33 ntpd[16131]: Soliciting pool server 147.189.175.171
2026-03-10T07:15:33.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:33 ntpd[16101]: Soliciting pool server 213.172.105.106
2026-03-10T07:15:33.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:33 ntpd[16101]: Soliciting pool server 147.189.175.171
2026-03-10T07:15:34.274 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:34 ntpd[16131]: Soliciting pool server 202.61.195.221
2026-03-10T07:15:34.274 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:34 ntpd[16101]: Soliciting pool server 202.61.195.221
2026-03-10T07:15:34.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:34 ntpd[16131]: Soliciting pool server 144.76.66.156
2026-03-10T07:15:34.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:34 ntpd[16131]: Soliciting pool server 81.169.217.236
2026-03-10T07:15:34.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:34 ntpd[16131]: Soliciting pool server 152.53.15.80
2026-03-10T07:15:34.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:34 ntpd[16101]: Soliciting pool server 144.76.66.156
2026-03-10T07:15:34.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:34 ntpd[16101]: Soliciting pool server 81.169.217.236
2026-03-10T07:15:34.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:34 ntpd[16101]: Soliciting pool server 152.53.15.80
2026-03-10T07:15:35.154 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:35 ntpd[16101]: Soliciting pool server 178.63.52.50
2026-03-10T07:15:35.154 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:35 ntpd[16131]: Soliciting pool server 178.63.52.50
2026-03-10T07:15:35.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:35 ntpd[16131]: Soliciting pool server 46.224.156.215
2026-03-10T07:15:35.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:35 ntpd[16131]: Soliciting pool server 131.188.3.221
2026-03-10T07:15:35.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:35 ntpd[16131]: Soliciting pool server 212.18.3.19
2026-03-10T07:15:35.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:35 ntpd[16131]: Soliciting pool server 185.125.190.57
2026-03-10T07:15:35.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:35 ntpd[16101]: Soliciting pool server 46.224.156.215
2026-03-10T07:15:35.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:35 ntpd[16101]: Soliciting pool server 131.188.3.221
2026-03-10T07:15:35.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:35 ntpd[16101]: Soliciting pool server 212.18.3.19
2026-03-10T07:15:35.983 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:35 ntpd[16101]: Soliciting pool server 185.125.190.57
2026-03-10T07:15:36.942 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:36 ntpd[16131]: Soliciting pool server 185.125.190.58
2026-03-10T07:15:36.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:36 ntpd[16131]: Soliciting pool server 185.13.148.71
2026-03-10T07:15:36.943 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:36 ntpd[16131]: Soliciting pool server 129.70.132.36
2026-03-10T07:15:36.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:36 ntpd[16101]: Soliciting pool server 185.125.190.58
2026-03-10T07:15:36.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:36 ntpd[16101]: Soliciting pool server 185.13.148.71
2026-03-10T07:15:36.982 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:36 ntpd[16101]: Soliciting pool server 129.70.132.36
2026-03-10T07:15:38.967 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 07:15:38 ntpd[16131]: ntpd: time slew +0.011929 s
2026-03-10T07:15:38.967 INFO:teuthology.orchestra.run.vm05.stdout:ntpd: time slew +0.011929s
2026-03-10T07:15:38.991 INFO:teuthology.orchestra.run.vm05.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T07:15:38.991 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-10T07:15:38.991 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:38.991 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:38.991 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:38.991 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:38.991 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:39.007 INFO:teuthology.orchestra.run.vm09.stdout:10 Mar 07:15:39 ntpd[16101]: ntpd: time slew +0.000441 s
2026-03-10T07:15:39.007 INFO:teuthology.orchestra.run.vm09.stdout:ntpd: time slew +0.000441s
2026-03-10T07:15:39.030 INFO:teuthology.orchestra.run.vm09.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T07:15:39.030 INFO:teuthology.orchestra.run.vm09.stdout:==============================================================================
2026-03-10T07:15:39.030 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:39.030 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:39.030 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:39.030 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:39.030 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:15:39.030 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T07:15:39.077 INFO:tasks.cephadm:Config: {'roleless': True, 'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_DAEMON_PLACE_FAIL', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T07:15:39.077 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:15:39.077 INFO:tasks.cephadm:Cluster fsid is f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:15:39.077 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T07:15:39.077 INFO:tasks.cephadm:No mon roles; fabricating mons
2026-03-10T07:15:39.077 INFO:tasks.cephadm:Monitor IPs: {'mon.vm05': '192.168.123.105', 'mon.vm09': '192.168.123.109'}
2026-03-10T07:15:39.077 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T07:15:39.077 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s)
2026-03-10T07:15:39.086 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s)
2026-03-10T07:15:39.102 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-10T07:15:39.102 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:15:39.660 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T07:15:40.343 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:15:40.344 INFO:tasks.cephadm:Discovered chacra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-10T07:15:40.344 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-10T07:15:40.344 DEBUG:teuthology.orchestra.run.vm05:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T07:15:41.705 INFO:teuthology.orchestra.run.vm05.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 07:15 /home/ubuntu/cephtest/cephadm
2026-03-10T07:15:41.705 DEBUG:teuthology.orchestra.run.vm09:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T07:15:43.123 INFO:teuthology.orchestra.run.vm09.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 07:15 /home/ubuntu/cephtest/cephadm
2026-03-10T07:15:43.124 DEBUG:teuthology.orchestra.run.vm05:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T07:15:43.128 DEBUG:teuthology.orchestra.run.vm09:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T07:15:43.136 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
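The task fetches the single-file cephadm binary from chacra and sanity-checks it before marking it executable. The same fetch-and-verify pattern, annotated as a standalone sketch:

    url=https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
    curl --silent -L "$url" > /home/ubuntu/cephtest/cephadm
    # reject empty or implausibly small downloads (e.g. an HTML error page)
    test -s /home/ubuntu/cephtest/cephadm
    test "$(stat -c%s /home/ubuntu/cephtest/cephadm)" -gt 1000
    chmod +x /home/ubuntu/cephtest/cephadm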
2026-03-10T07:15:43.136 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T07:15:43.172 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T07:15:43.264 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T07:15:43.273 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T07:16:25.010 INFO:teuthology.orchestra.run.vm09.stdout:{
2026-03-10T07:16:25.010 INFO:teuthology.orchestra.run.vm09.stdout:    "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T07:16:25.010 INFO:teuthology.orchestra.run.vm09.stdout:    "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T07:16:25.010 INFO:teuthology.orchestra.run.vm09.stdout:    "repo_digests": [
2026-03-10T07:16:25.010 INFO:teuthology.orchestra.run.vm09.stdout:        "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T07:16:25.010 INFO:teuthology.orchestra.run.vm09.stdout:    ]
2026-03-10T07:16:25.010 INFO:teuthology.orchestra.run.vm09.stdout:}
2026-03-10T07:16:36.455 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T07:16:36.455 INFO:teuthology.orchestra.run.vm05.stdout:    "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T07:16:36.455 INFO:teuthology.orchestra.run.vm05.stdout:    "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T07:16:36.455 INFO:teuthology.orchestra.run.vm05.stdout:    "repo_digests": [
2026-03-10T07:16:36.455 INFO:teuthology.orchestra.run.vm05.stdout:        "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T07:16:36.455 INFO:teuthology.orchestra.run.vm05.stdout:    ]
2026-03-10T07:16:36.455 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T07:16:36.478 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph
2026-03-10T07:16:36.489 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph
2026-03-10T07:16:36.496 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph
2026-03-10T07:16:36.540 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph
2026-03-10T07:16:36.546 INFO:tasks.cephadm:Writing seed config...
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T07:16:36.546 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-10T07:16:36.547 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:16:36.547 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T07:16:36.584 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = f0f57d3c-1c50-11f1-837e-f755e850132e

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
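The bootstrap command issued next is a single long line; broken out flag by flag (same invocation, reformatted here for readability only):

    # Bootstrap with the exact CI image, reusing the fsid and seed conf from above,
    # and leave the admin conf/keyring where later cephadm.shell steps expect them.
    sudo /home/ubuntu/cephtest/cephadm \
      --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
      -v bootstrap \
      --fsid f0f57d3c-1c50-11f1-837e-f755e850132e \
      --config /home/ubuntu/cephtest/seed.ceph.conf \
      --output-config /etc/ceph/ceph.conf \
      --output-keyring /etc/ceph/ceph.client.admin.keyring \
      --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub \
      --mon-ip 192.168.123.105 \
      --skip-admin-label
    # make the admin keyring readable for the unprivileged test user
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring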
2026-03-10T07:16:36.584 DEBUG:teuthology.orchestra.run.vm05:mon.vm05> sudo journalctl -f -n 0 -u ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05.service
2026-03-10T07:16:36.626 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T07:16:36.626 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid f0f57d3c-1c50-11f1-837e-f755e850132e --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.105 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:16:36.763 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-10T07:16:36.763 INFO:teuthology.orchestra.run.vm05.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'f0f57d3c-1c50-11f1-837e-f755e850132e', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.105', '--skip-admin-label']
2026-03-10T07:16:36.763 INFO:teuthology.orchestra.run.vm05.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T07:16:36.764 INFO:teuthology.orchestra.run.vm05.stdout:Verifying podman|docker is present...
2026-03-10T07:16:36.764 INFO:teuthology.orchestra.run.vm05.stdout:Verifying lvm2 is present...
2026-03-10T07:16:36.764 INFO:teuthology.orchestra.run.vm05.stdout:Verifying time synchronization is in place...
2026-03-10T07:16:36.767 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T07:16:36.767 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T07:16:36.770 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T07:16:36.770 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.772 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-10T07:16:36.772 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T07:16:36.775 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-10T07:16:36.775 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.777 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-10T07:16:36.777 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout masked
2026-03-10T07:16:36.780 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-10T07:16:36.780 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.783 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-10T07:16:36.783 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T07:16:36.786 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-10T07:16:36.786 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.788 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled
2026-03-10T07:16:36.791 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active
2026-03-10T07:16:36.791 INFO:teuthology.orchestra.run.vm05.stdout:Unit ntp.service is enabled and running
2026-03-10T07:16:36.791 INFO:teuthology.orchestra.run.vm05.stdout:Repeating the final host check...
2026-03-10T07:16:36.791 INFO:teuthology.orchestra.run.vm05.stdout:docker (/usr/bin/docker) is present
2026-03-10T07:16:36.791 INFO:teuthology.orchestra.run.vm05.stdout:systemctl is present
2026-03-10T07:16:36.791 INFO:teuthology.orchestra.run.vm05.stdout:lvcreate is present
2026-03-10T07:16:36.793 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T07:16:36.793 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T07:16:36.795 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T07:16:36.795 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.797 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-10T07:16:36.797 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T07:16:36.799 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-10T07:16:36.799 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.801 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-10T07:16:36.801 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout masked
2026-03-10T07:16:36.803 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-10T07:16:36.803 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.807 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-10T07:16:36.807 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T07:16:36.810 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-10T07:16:36.810 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T07:16:36.813 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled
2026-03-10T07:16:36.815 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active
2026-03-10T07:16:36.815 INFO:teuthology.orchestra.run.vm05.stdout:Unit ntp.service is enabled and running
2026-03-10T07:16:36.816 INFO:teuthology.orchestra.run.vm05.stdout:Host looks OK
2026-03-10T07:16:36.816 INFO:teuthology.orchestra.run.vm05.stdout:Cluster fsid: f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:16:36.816 INFO:teuthology.orchestra.run.vm05.stdout:Acquiring lock 140002216167312 on /run/cephadm/f0f57d3c-1c50-11f1-837e-f755e850132e.lock
2026-03-10T07:16:36.816 INFO:teuthology.orchestra.run.vm05.stdout:Lock 140002216167312 acquired on /run/cephadm/f0f57d3c-1c50-11f1-837e-f755e850132e.lock
2026-03-10T07:16:36.816 INFO:teuthology.orchestra.run.vm05.stdout:Verifying IP 192.168.123.105 port 3300 ...
2026-03-10T07:16:36.816 INFO:teuthology.orchestra.run.vm05.stdout:Verifying IP 192.168.123.105 port 6789 ...
2026-03-10T07:16:36.816 INFO:teuthology.orchestra.run.vm05.stdout:Base mon IP(s) is [192.168.123.105:3300, 192.168.123.105:6789], mon addrv is [v2:192.168.123.105:3300,v1:192.168.123.105:6789]
2026-03-10T07:16:36.818 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.105 metric 100
2026-03-10T07:16:36.818 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-10T07:16:36.818 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.105 metric 100
2026-03-10T07:16:36.818 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.105 metric 100
2026-03-10T07:16:36.819 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T07:16:36.819 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-10T07:16:36.820 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T07:16:36.820 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T07:16:36.820 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T07:16:36.820 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-10T07:16:36.820 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:5/64 scope link
2026-03-10T07:16:36.820 INFO:teuthology.orchestra.run.vm05.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T07:16:36.821 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.0/24`
2026-03-10T07:16:36.821 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.0/24`
2026-03-10T07:16:36.821 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.1/32`
2026-03-10T07:16:36.821 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.1/32`
2026-03-10T07:16:36.821 INFO:teuthology.orchestra.run.vm05.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-10T07:16:36.821 INFO:teuthology.orchestra.run.vm05.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T07:16:36.821 INFO:teuthology.orchestra.run.vm05.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T07:16:37.857 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-10T07:16:37.857 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T07:16:37.858 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:16:37.858 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:16:38.021 INFO:teuthology.orchestra.run.vm05.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T07:16:38.021 INFO:teuthology.orchestra.run.vm05.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T07:16:38.021 INFO:teuthology.orchestra.run.vm05.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T07:16:38.184 INFO:teuthology.orchestra.run.vm05.stdout:stat: stdout 167 167
2026-03-10T07:16:38.184 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial keys...
2026-03-10T07:16:38.303 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQBWxa9pDAMeEBAAVpnlog+oGRr+bQYQTc0GIQ==
2026-03-10T07:16:38.427 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQBWxa9pRXSJFxAAt9ewtnViEGcpBWSs32X+Rw==
2026-03-10T07:16:38.605 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQBWxa9pByZQIhAAQCoxdro91ma0WzCa/GVMtg==
2026-03-10T07:16:38.605 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial monmap...
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:monmaptool for vm05 [v2:192.168.123.105:3300,v1:192.168.123.105:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:setting min_mon_release = quincy
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: set fsid to f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:16:38.728 INFO:teuthology.orchestra.run.vm05.stdout:Creating mon...
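The three ceph-authtool values are freshly generated base64 keys, and the monmaptool run seeds a single-monitor epoch-0 map. Roughly equivalent standalone commands, with illustrative paths (the real invocations run inside the container with cephadm-chosen arguments):

    # Illustrative only, not the literal cephadm invocation:
    ceph-authtool --create-keyring /tmp/keyring --gen-key -n mon. --cap mon 'allow *'
    monmaptool --create --fsid f0f57d3c-1c50-11f1-837e-f755e850132e \
        --addv vm05 '[v2:192.168.123.105:3300,v1:192.168.123.105:6789]' /tmp/monmap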
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.816+0000 7fc6d88c5d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.816+0000 7fc6d88c5d80 1 imported monmap:
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr fsid f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-10T07:16:38.694276+0000
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr created 2026-03-10T07:16:38.694276+0000
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.816+0000 7fc6d88c5d80 0 /usr/bin/ceph-mon: set fsid to f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Git sha 0
2026-03-10T07:16:38.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: DB SUMMARY
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: DB Session ID: E779X5G4NQYS5TGXTH8U
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm05/store.db dir, Total Num: 0, files:
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm05/store.db:
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.error_if_exists: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.create_if_missing: 1
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.env: 0x560dd0ee6dc0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.info_log: 0x560e110dae60
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.statistics: (nil)
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.use_fsync: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.db_log_dir:
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.wal_dir:
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T07:16:38.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.write_buffer_manager: 0x560e110d15e0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.unordered_write: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.row_cache: None
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.wal_filter: None
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.two_write_queues: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.wal_compression: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.atomic_flush: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T07:16:38.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_open_files: -1
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Compression algorithms supported:
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kZSTD supported: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kXpressCompression supported: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kZlibCompression supported: 1
2026-03-10T07:16:38.871 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.820+0000 7fc6d88c5d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000001
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.872 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.merge_operator:
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_filter: None
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560e110cd580)
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x560e110f3350
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-10T07:16:38.873 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression: NoCompression
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.num_levels: 7
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T07:16:38.874 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T07:16:38.877 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.ttl: 2592000
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 905381dd-f3ff-41c5-8402-bbfa3dc72292
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.824+0000 7fc6d88c5d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.828+0000 7fc6d88c5d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560e110f4e00
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.828+0000 7fc6d88c5d80 4 rocksdb: DB pointer 0x560e111d8000
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.828+0000 7fc6d004f640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T07:16:38.878 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.828+0000 7fc6d004f640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x560e110f3350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.1e-05 secs_since: 0
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] **
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.828+0000 7fc6d88c5d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.828+0000 7fc6d88c5d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T07:16:38.832+0000 7fc6d88c5d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-vm05 for mon.vm05
2026-03-10T07:16:38.879 INFO:teuthology.orchestra.run.vm05.stdout:create mon.vm05 on
2026-03-10T07:16:39.186 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T07:16:39.355 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e.target → /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e.target.
2026-03-10T07:16:39.356 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e.target → /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e.target.
2026-03-10T07:16:39.557 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05
2026-03-10T07:16:39.557 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05.service: Unit ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05.service not loaded.
2026-03-10T07:16:39.733 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e.target.wants/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05.service → /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service.
2026-03-10T07:16:39.743 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present
2026-03-10T07:16:39.743 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T07:16:39.743 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon to start...
2026-03-10T07:16:39.743 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon...
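Everything from the monmap import through the RocksDB option dump and "Shutdown complete" is ceph-mon's one-shot --mkfs pass: it initializes a fresh store.db, writes the seed data, and exits, after which cephadm wires up the per-cluster systemd units. A rough standalone equivalent, with illustrative paths (not the literal containerized invocation):

    # Initialize the mon data dir from the seed monmap and keyring, then enable the unit.
    ceph-mon --mkfs -i vm05 --monmap /tmp/monmap --keyring /tmp/keyring
    systemctl enable --now ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05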
2026-03-10T07:16:40.132 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:39 vm05 bash[17045]: cluster 2026-03-10T07:16:39.902350+0000 mon.vm05 (mon.0) 1 : cluster [INF] mon.vm05 is new leader, mons vm05 in quorum (ranks 0) 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout id: f0f57d3c-1c50-11f1-837e-f755e850132e 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout services: 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm05 (age 0.218335s) 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T07:16:40.163 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout data: 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:mon is available 2026-03-10T07:16:40.164 INFO:teuthology.orchestra.run.vm05.stdout:Assimilating anything we can from ceph.conf... 
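The "Waiting for mon..." message above is followed by a poll of the cluster until the monitor responds; once it does, the status dump shows HEALTH_OK with a single mon in quorum. A rough sketch of such a wait loop, assuming only the stock ceph CLI and its --format json output (wait_for_mon is a hypothetical helper, not cephadm's actual code):

import json
import subprocess
import time

def wait_for_mon(timeout: float = 60.0) -> dict:
    # Poll `ceph status` until the mon answers, then return the parsed
    # status so callers can inspect fields like health.status and fsid.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        proc = subprocess.run(["ceph", "status", "--format", "json"],
                              capture_output=True, text=True)
        if proc.returncode == 0:
            return json.loads(proc.stdout)
        time.sleep(1)
    raise TimeoutError("mon did not become available")

status = wait_for_mon()
print(status["fsid"], status["health"]["status"])  # e.g. ... HEALTH_OK
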
2026-03-10T07:16:40.405 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = f0f57d3c-1c50-11f1-837e-f755e850132e 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.105:3300,v1:192.168.123.105:6789] 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T07:16:40.406 INFO:teuthology.orchestra.run.vm05.stdout:Generating new minimal ceph.conf... 2026-03-10T07:16:40.595 INFO:teuthology.orchestra.run.vm05.stdout:Restarting the monitor... 2026-03-10T07:16:40.695 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 systemd[1]: Stopping Ceph mon.vm05 for f0f57d3c-1c50-11f1-837e-f755e850132e... 2026-03-10T07:16:40.696 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17045]: debug 2026-03-10T07:16:40.632+0000 7fae1efd2640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm05 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T07:16:40.696 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17045]: debug 2026-03-10T07:16:40.632+0000 7fae1efd2640 -1 mon.vm05@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T07:16:40.754 INFO:teuthology.orchestra.run.vm05.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17434]: ceph-f0f57d3c-1c50-11f1-837e-f755e850132e-mon-vm05 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 systemd[1]: ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05.service: Deactivated successfully. 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 systemd[1]: Stopped Ceph mon.vm05 for f0f57d3c-1c50-11f1-837e-f755e850132e. 
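The dump above is the output of the assimilation step: options found in the local ceph.conf are absorbed into the monitor's central config database, and whatever remains is written back out as the new minimal ceph.conf. In CLI terms this corresponds to ceph config assimilate-conf, and the subsequent public_network change maps onto a plain ceph config set; a hedged sketch under those assumptions (the file paths are illustrative):

import subprocess

# Move options from the local ceph.conf into the mon config database;
# the leftover minimal config is written to the -o path.
subprocess.run(["ceph", "config", "assimilate-conf",
                "-i", "/etc/ceph/ceph.conf",
                "-o", "/etc/ceph/ceph.conf.minimal"], check=True)

# Pin public_network in the mon section, as the log does next.
subprocess.run(["ceph", "config", "set", "mon", "public_network",
                "192.168.123.0/24,192.168.123.1/32"], check=True)
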
2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 systemd[1]: Started Ceph mon.vm05 for f0f57d3c-1c50-11f1-837e-f755e850132e. 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 0 load: jerasure load: lrc 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Git sha 0 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: DB SUMMARY 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: DB Session ID: FVEWY0IBKIQJ1ANHZMD2 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T07:16:40.951 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm05/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm05/store.db: 000009.log size: 75071 ; 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T07:16:40.952 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.env: 0x55c14519ddc0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.info_log: 0x55c1577c2de0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 
7fdc10277d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.db_log_dir: 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.wal_dir: 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.write_buffer_manager: 0x55c1577c7900 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: 
Options.wal_recovery_mode: 2 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T07:16:40.952 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.row_cache: None 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.wal_filter: None 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T07:16:40.953 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T07:16:40.953 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Compression algorithms supported: 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kZSTD supported: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 
2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000010 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.merge_operator: 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T07:16:40.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c1577c25c0) 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cache_index_and_filter_blocks: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: pin_top_level_index_and_filter: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: index_type: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: data_block_index_type: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: index_shortening: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: checksum: 4 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: no_block_cache: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: block_cache: 0x55c1577e9350 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: 
block_cache_name: BinnedLRUCache 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: block_cache_options: 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: capacity : 536870912 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: num_shard_bits : 4 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: strict_capacity_limit : 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: high_pri_pool_ratio: 0.000 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: block_cache_compressed: (nil) 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: persistent_cache: (nil) 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: block_size: 4096 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: block_size_deviation: 10 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: block_restart_interval: 16 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: index_block_restart_interval: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: metadata_block_size: 4096 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: partition_filters: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: use_delta_encoding: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: filter_policy: bloomfilter 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: whole_key_filtering: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: verify_compression: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: read_amp_bytes_per_bit: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: format_version: 5 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: enable_index_compression: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: block_align: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: max_auto_readahead_size: 262144 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: prepopulate_block_cache: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: initial_auto_readahead_size: 8192 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: num_file_reads_for_auto_readahead: 2 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_write_buffer_number: 2 
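The block cache numbers above are self-consistent with the stats dump earlier in the log: capacity is printed here in raw bytes (536870912 bytes is the "512.00 MB" figure), and num_shard_bits = 4 means the BinnedLRUCache is split into 2**4 shards. A quick arithmetic check:

capacity_bytes = 536870912   # "capacity : 536870912" above
num_shard_bits = 4           # "num_shard_bits : 4" above

print(capacity_bytes / 2**20)                       # 512.0 (MiB total)
print(2**num_shard_bits)                            # 16 shards
print(capacity_bytes / 2**num_shard_bits / 2**20)   # 32.0 MiB per shard
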
2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.num_levels: 7 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:16:40.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:16:40.955 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 
2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 
2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T07:16:40.955 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T07:16:40.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.880+0000 7fdc10277d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 
0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 905381dd-f3ff-41c5-8402-bbfa3dc72292 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127000887295, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127000888778, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 223, "table_properties": {"data_size": 70418, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9562, "raw_average_key_size": 49, "raw_value_size": 65043, "raw_average_value_size": 335, "num_data_blocks": 8, "num_entries": 194, "num_filter_entries": 194, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773127000, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "905381dd-f3ff-41c5-8402-bbfa3dc72292", "db_session_id": "FVEWY0IBKIQJ1ANHZMD2", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127000888838, "job": 1, "event": "recovery_finished"} 2026-03-10T07:16:40.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: debug 2026-03-10T07:16:40.884+0000 7fdc10277d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T07:16:41.013 INFO:teuthology.orchestra.run.vm05.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T07:16:41.014 INFO:teuthology.orchestra.run.vm05.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:16:41.014 INFO:teuthology.orchestra.run.vm05.stdout:Creating mgr... 2026-03-10T07:16:41.014 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:9283 ... 
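The EVENT_LOG_v1 records above ("recovery_started", "table_file_creation", "recovery_finished") are single-line JSON payloads embedded in the mon's stderr, which makes them easy to mine out of a captured log. A small illustrative parser (iter_rocksdb_events is a hypothetical helper, not part of teuthology):

import json
import re

EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def iter_rocksdb_events(lines):
    # Yield each embedded EVENT_LOG_v1 payload as a Python dict.
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1773127000888838, '
          '"job": 1, "event": "recovery_finished"}')
for ev in iter_rocksdb_events([sample]):
    print(ev["event"], ev["job"])  # recovery_finished 1
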
2026-03-10T07:16:41.014 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T07:16:41.014 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:8443 ... 2026-03-10T07:16:41.196 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mgr.vm05.wnsmpp 2026-03-10T07:16:41.196 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mgr.vm05.wnsmpp.service: Unit ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mgr.vm05.wnsmpp.service not loaded. 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900202+0000 mon.vm05 (mon.0) 1 : cluster [INF] mon.vm05 is new leader, mons vm05 in quorum (ranks 0) 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900247+0000 mon.vm05 (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900251+0000 mon.vm05 (mon.0) 3 : cluster [DBG] fsid f0f57d3c-1c50-11f1-837e-f755e850132e 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900254+0000 mon.vm05 (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T07:16:38.694276+0000 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900263+0000 mon.vm05 (mon.0) 5 : cluster [DBG] created 2026-03-10T07:16:38.694276+0000 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900267+0000 mon.vm05 (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900269+0000 mon.vm05 (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900272+0000 mon.vm05 (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-10T07:16:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900553+0000 mon.vm05 (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T07:16:41.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.900566+0000 mon.vm05 (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T07:16:41.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:40 vm05 bash[17520]: cluster 2026-03-10T07:16:40.901055+0000 mon.vm05 (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T07:16:41.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:41 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:16:41.343 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e.target.wants/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mgr.vm05.wnsmpp.service → /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service. 2026-03-10T07:16:41.350 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-10T07:16:41.350 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T07:16:41.350 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-10T07:16:41.351 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[9283, 8765, 8443]>. firewalld.service is not available 2026-03-10T07:16:41.351 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr to start... 2026-03-10T07:16:41.351 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr... 2026-03-10T07:16:41.599 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:41 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'.
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:16:41.655 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "f0f57d3c-1c50-11f1-837e-f755e850132e", 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "vm05" 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:16:39:906868+0000",
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:41.656 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:16:39.907745+0000",
2026-03-10T07:16:41.657 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:16:41.657 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:41.657 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T07:16:41.657 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:16:41.657 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (1/15)...
2026-03-10T07:16:42.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:41 vm05 bash[17520]: audit 2026-03-10T07:16:40.973472+0000 mon.vm05 (mon.0) 12 : audit [INF] from='client.? 192.168.123.105:0/1622097596' entity='client.admin'
2026-03-10T07:16:42.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:41 vm05 bash[17520]: audit 2026-03-10T07:16:41.608855+0000 mon.vm05 (mon.0) 13 : audit [DBG] from='client.? 192.168.123.105:0/3812699489' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "f0f57d3c-1c50-11f1-837e-f755e850132e",
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "vm05"
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 2,
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T07:16:43.963 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T07:16:43.964 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:16:39:906868+0000",
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:16:39.907745+0000",
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:16:43.965 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (2/15)...
2026-03-10T07:16:44.134 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:43 vm05 bash[17520]: audit 2026-03-10T07:16:43.878717+0000 mon.vm05 (mon.0) 14 : audit [DBG] from='client.? 192.168.123.105:0/2077111326' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T07:16:45.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:44 vm05 bash[17520]: cluster 2026-03-10T07:16:44.898665+0000 mon.vm05 (mon.0) 15 : cluster [INF] Activating manager daemon vm05.wnsmpp
2026-03-10T07:16:45.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:44 vm05 bash[17520]: cluster 2026-03-10T07:16:44.929260+0000 mon.vm05 (mon.0) 16 : cluster [DBG] mgrmap e2: vm05.wnsmpp(active, starting, since 0.0306819s)
2026-03-10T07:16:45.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:44 vm05 bash[17520]: audit 2026-03-10T07:16:44.929784+0000 mon.vm05 (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:16:45.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:44 vm05 bash[17520]: audit 2026-03-10T07:16:44.930313+0000 mon.vm05 (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:16:45.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:44 vm05 bash[17520]: audit 2026-03-10T07:16:44.930370+0000 mon.vm05 (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:16:45.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:44 vm05 bash[17520]: audit 2026-03-10T07:16:44.930740+0000 mon.vm05 (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:16:45.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:44 vm05 bash[17520]: audit 2026-03-10T07:16:44.931468+0000 mon.vm05 (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr metadata", "who": "vm05.wnsmpp", "id": "vm05.wnsmpp"}]: dispatch
2026-03-10T07:16:46.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:45 vm05 bash[17520]: cluster 2026-03-10T07:16:44.944444+0000 mon.vm05 (mon.0) 22 : cluster [INF] Manager daemon vm05.wnsmpp is now available
2026-03-10T07:16:46.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:45 vm05 bash[17520]: audit 2026-03-10T07:16:44.957544+0000 mon.vm05 (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:46.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:45 vm05 bash[17520]: audit 2026-03-10T07:16:44.961582+0000 mon.vm05 (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:46.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:45 vm05 bash[17520]: audit 2026-03-10T07:16:44.961723+0000 mon.vm05 (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:16:46.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:45 vm05 bash[17520]: audit 2026-03-10T07:16:44.964579+0000 mon.vm05 (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:46.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:45 vm05 bash[17520]: audit 2026-03-10T07:16:44.966807+0000 mon.vm05 (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.105:0/2621356470' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/trash_purge_schedule"}]: dispatch
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "f0f57d3c-1c50-11f1-837e-f755e850132e",
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "vm05"
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T07:16:46.285 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:16:39:906868+0000",
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T07:16:46.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:16:39.907745+0000",
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:16:46.287 INFO:teuthology.orchestra.run.vm05.stdout:mgr is available
2026-03-10T07:16:46.543 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.105:3300,v1:192.168.123.105:6789]
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T07:16:46.544 INFO:teuthology.orchestra.run.vm05.stdout:Enabling cephadm module...
2026-03-10T07:16:47.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:46 vm05 bash[17520]: cluster 2026-03-10T07:16:45.945920+0000 mon.vm05 (mon.0) 28 : cluster [DBG] mgrmap e3: vm05.wnsmpp(active, since 1.04733s)
2026-03-10T07:16:47.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:46 vm05 bash[17520]: audit 2026-03-10T07:16:46.249042+0000 mon.vm05 (mon.0) 29 : audit [DBG] from='client.? 192.168.123.105:0/1118681063' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T07:16:47.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:46 vm05 bash[17520]: audit 2026-03-10T07:16:46.503468+0000 mon.vm05 (mon.0) 30 : audit [INF] from='client.? 192.168.123.105:0/3419681476' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T07:16:47.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:46 vm05 bash[17520]: audit 2026-03-10T07:16:46.786865+0000 mon.vm05 (mon.0) 31 : audit [INF] from='client.? 192.168.123.105:0/792377302' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 4,
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "active_name": "vm05.wnsmpp",
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart...
2026-03-10T07:16:47.360 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 4...
2026-03-10T07:16:48.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:47 vm05 bash[17520]: audit 2026-03-10T07:16:46.953988+0000 mon.vm05 (mon.0) 32 : audit [INF] from='client.? 192.168.123.105:0/792377302' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T07:16:48.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:47 vm05 bash[17520]: cluster 2026-03-10T07:16:46.959235+0000 mon.vm05 (mon.0) 33 : cluster [DBG] mgrmap e4: vm05.wnsmpp(active, since 2s)
2026-03-10T07:16:48.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:47 vm05 bash[17520]: audit 2026-03-10T07:16:47.288627+0000 mon.vm05 (mon.0) 34 : audit [DBG] from='client.? 192.168.123.105:0/3529731369' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T07:16:50.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: cluster 2026-03-10T07:16:50.364246+0000 mon.vm05 (mon.0) 35 : cluster [INF] Active manager daemon vm05.wnsmpp restarted
2026-03-10T07:16:50.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: cluster 2026-03-10T07:16:50.364496+0000 mon.vm05 (mon.0) 36 : cluster [INF] Activating manager daemon vm05.wnsmpp
2026-03-10T07:16:50.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: cluster 2026-03-10T07:16:50.369552+0000 mon.vm05 (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: cluster 2026-03-10T07:16:50.369900+0000 mon.vm05 (mon.0) 38 : cluster [DBG] mgrmap e5: vm05.wnsmpp(active, starting, since 0.00552034s)
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.372101+0000 mon.vm05 (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.372199+0000 mon.vm05 (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr metadata", "who": "vm05.wnsmpp", "id": "vm05.wnsmpp"}]: dispatch
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.373849+0000 mon.vm05 (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.374020+0000 mon.vm05 (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.374174+0000 mon.vm05 (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: cluster 2026-03-10T07:16:50.380741+0000 mon.vm05 (mon.0) 44 : cluster [INF] Manager daemon vm05.wnsmpp is now available
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.390243+0000 mon.vm05 (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.393421+0000 mon.vm05 (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.404698+0000 mon.vm05 (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.407312+0000 mon.vm05 (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:16:50.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:50 vm05 bash[17520]: audit 2026-03-10T07:16:50.408478+0000 mon.vm05 (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:16:51.431 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:16:51.431 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6,
2026-03-10T07:16:51.431 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T07:16:51.431 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:16:51.431 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 4 is available
2026-03-10T07:16:51.431 INFO:teuthology.orchestra.run.vm05.stdout:Setting orchestrator backend to cephadm...
2026-03-10T07:16:51.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:51 vm05 bash[17520]: cephadm 2026-03-10T07:16:50.387338+0000 mgr.vm05.wnsmpp (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-10T07:16:51.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:51 vm05 bash[17520]: audit 2026-03-10T07:16:50.423237+0000 mon.vm05 (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/trash_purge_schedule"}]: dispatch
2026-03-10T07:16:51.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:51 vm05 bash[17520]: audit 2026-03-10T07:16:51.031064+0000 mon.vm05 (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:51.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:51 vm05 bash[17520]: audit 2026-03-10T07:16:51.033838+0000 mon.vm05 (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:51.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:51 vm05 bash[17520]: cluster 2026-03-10T07:16:51.374556+0000 mon.vm05 (mon.0) 53 : cluster [DBG] mgrmap e6: vm05.wnsmpp(active, since 1.01017s)
2026-03-10T07:16:52.043 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-10T07:16:52.043 INFO:teuthology.orchestra.run.vm05.stdout:Generating ssh key...
2026-03-10T07:16:52.606 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP1n1QO2wP151RwRIlN4GOVML6VT3rzAS72dqHYLqcGm ceph-f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:16:52.606 INFO:teuthology.orchestra.run.vm05.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-10T07:16:52.606 INFO:teuthology.orchestra.run.vm05.stdout:Adding key to root@localhost authorized_keys...
2026-03-10T07:16:52.606 INFO:teuthology.orchestra.run.vm05.stdout:Adding host vm05...
2026-03-10T07:16:52.838 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:51.375499+0000 mgr.vm05.wnsmpp (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:51.382193+0000 mgr.vm05.wnsmpp (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: cephadm 2026-03-10T07:16:51.672603+0000 mgr.vm05.wnsmpp (mgr.14118) 4 : cephadm [INF] [10/Mar/2026:07:16:51] ENGINE Bus STARTING
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:51.725094+0000 mgr.vm05.wnsmpp (mgr.14118) 5 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:51.729146+0000 mon.vm05 (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:51.736727+0000 mon.vm05 (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: cephadm 2026-03-10T07:16:51.784645+0000 mgr.vm05.wnsmpp (mgr.14118) 6 : cephadm [INF] [10/Mar/2026:07:16:51] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: cephadm 2026-03-10T07:16:51.785532+0000 mgr.vm05.wnsmpp (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:07:16:51] ENGINE Client ('192.168.123.105', 38100) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: cephadm 2026-03-10T07:16:51.885627+0000 mgr.vm05.wnsmpp (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:07:16:51] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: cephadm 2026-03-10T07:16:51.885720+0000 mgr.vm05.wnsmpp (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:07:16:51] ENGINE Bus STARTED
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:51.886491+0000 mon.vm05 (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:52.287375+0000 mon.vm05 (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:52.839 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:52 vm05 bash[17520]: audit 2026-03-10T07:16:52.290002+0000 mon.vm05 (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:53 vm05 bash[17520]: audit 2026-03-10T07:16:52.008796+0000 mgr.vm05.wnsmpp (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:53 vm05 bash[17520]: audit 2026-03-10T07:16:52.270135+0000 mgr.vm05.wnsmpp (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:53 vm05 bash[17520]: cephadm 2026-03-10T07:16:52.270376+0000 mgr.vm05.wnsmpp (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-10T07:16:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:53 vm05 bash[17520]: audit 2026-03-10T07:16:52.566912+0000 mgr.vm05.wnsmpp (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:53 vm05 bash[17520]: audit 2026-03-10T07:16:52.831329+0000 mgr.vm05.wnsmpp (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:53 vm05 bash[17520]: cluster 2026-03-10T07:16:53.293309+0000 mon.vm05 (mon.0) 59 : cluster [DBG] mgrmap e7: vm05.wnsmpp(active, since 2s)
2026-03-10T07:16:54.801 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Added host 'vm05' with addr '192.168.123.105'
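The `orch host add` audit entry and the `Added host 'vm05'` line above record the bootstrap host being registered with the orchestrator. For reference, the CLI form of that call, the same command the harness later issues for vm09, would be approximately:

  ceph orch host add vm05 192.168.123.105
  ceph orch host ls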
2026-03-10T07:16:54.801 INFO:teuthology.orchestra.run.vm05.stdout:Deploying mon service with default placement...
2026-03-10T07:16:55.046 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:54 vm05 bash[17520]: cephadm 2026-03-10T07:16:53.439982+0000 mgr.vm05.wnsmpp (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm05
2026-03-10T07:16:55.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T07:16:55.080 INFO:teuthology.orchestra.run.vm05.stdout:Deploying mgr service with default placement...
2026-03-10T07:16:55.361 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-10T07:16:55.361 INFO:teuthology.orchestra.run.vm05.stdout:Deploying crash service with default placement...
2026-03-10T07:16:55.638 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled crash update...
2026-03-10T07:16:55.639 INFO:teuthology.orchestra.run.vm05.stdout:Deploying ceph-exporter service with default placement...
2026-03-10T07:16:55.886 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:55 vm05 bash[17520]: audit 2026-03-10T07:16:54.743568+0000 mon.vm05 (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:55.886 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:55 vm05 bash[17520]: cephadm 2026-03-10T07:16:54.743918+0000 mgr.vm05.wnsmpp (mgr.14118) 16 : cephadm [INF] Added host vm05
2026-03-10T07:16:55.886 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:55 vm05 bash[17520]: audit 2026-03-10T07:16:54.746665+0000 mon.vm05 (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:16:55.886 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:55 vm05 bash[17520]: audit 2026-03-10T07:16:55.031301+0000 mon.vm05 (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:55.886 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:55 vm05 bash[17520]: audit 2026-03-10T07:16:55.324313+0000 mon.vm05 (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:55.886 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:55 vm05 bash[17520]: audit 2026-03-10T07:16:55.602186+0000 mon.vm05 (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:55.916 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update...
2026-03-10T07:16:55.916 INFO:teuthology.orchestra.run.vm05.stdout:Deploying prometheus service with default placement...
2026-03-10T07:16:56.316 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled prometheus update...
2026-03-10T07:16:56.316 INFO:teuthology.orchestra.run.vm05.stdout:Deploying grafana service with default placement...
2026-03-10T07:16:56.662 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled grafana update...
2026-03-10T07:16:56.662 INFO:teuthology.orchestra.run.vm05.stdout:Deploying node-exporter service with default placement...
2026-03-10T07:16:56.878 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:55.027471+0000 mgr.vm05.wnsmpp (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:56.878 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: cephadm 2026-03-10T07:16:55.028377+0000 mgr.vm05.wnsmpp (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-10T07:16:56.878 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:55.321072+0000 mgr.vm05.wnsmpp (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:56.878 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: cephadm 2026-03-10T07:16:55.321751+0000 mgr.vm05.wnsmpp (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:55.598760+0000 mgr.vm05.wnsmpp (mgr.14118) 21 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: cephadm 2026-03-10T07:16:55.599437+0000 mgr.vm05.wnsmpp (mgr.14118) 22 : cephadm [INF] Saving service crash spec with placement *
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:55.872965+0000 mgr.vm05.wnsmpp (mgr.14118) 23 : audit [DBG] from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: cephadm 2026-03-10T07:16:55.873727+0000 mgr.vm05.wnsmpp (mgr.14118) 24 : cephadm [INF] Saving service ceph-exporter spec with placement *
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:55.876596+0000 mon.vm05 (mon.0) 65 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:56.208631+0000 mon.vm05 (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:56.450027+0000 mon.vm05 (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:56.601698+0000 mon.vm05 (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:56.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:56 vm05 bash[17520]: audit 2026-03-10T07:16:56.802231+0000 mon.vm05 (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:57.004 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update...
2026-03-10T07:16:57.004 INFO:teuthology.orchestra.run.vm05.stdout:Deploying alertmanager service with default placement...
2026-03-10T07:16:57.297 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update...
2026-03-10T07:16:57.823 INFO:teuthology.orchestra.run.vm05.stdout:Enabling the dashboard module...
2026-03-10T07:16:58.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: audit 2026-03-10T07:16:56.183984+0000 mgr.vm05.wnsmpp (mgr.14118) 25 : audit [DBG] from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: cephadm 2026-03-10T07:16:56.184671+0000 mgr.vm05.wnsmpp (mgr.14118) 26 : cephadm [INF] Saving service prometheus spec with placement count:1
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: audit 2026-03-10T07:16:56.598088+0000 mgr.vm05.wnsmpp (mgr.14118) 27 : audit [DBG] from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: cephadm 2026-03-10T07:16:56.598893+0000 mgr.vm05.wnsmpp (mgr.14118) 28 : cephadm [INF] Saving service grafana spec with placement count:1
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: audit 2026-03-10T07:16:56.933291+0000 mgr.vm05.wnsmpp (mgr.14118) 29 : audit [DBG] from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: cephadm 2026-03-10T07:16:56.933995+0000 mgr.vm05.wnsmpp (mgr.14118) 30 : cephadm [INF] Saving service node-exporter spec with placement *
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: audit 2026-03-10T07:16:56.941497+0000 mon.vm05 (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: audit 2026-03-10T07:16:57.253091+0000 mon.vm05 (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.105:0/1989325108' entity='mgr.vm05.wnsmpp'
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: audit 2026-03-10T07:16:57.518126+0000 mon.vm05 (mon.0) 72 : audit [INF] from='client.? 192.168.123.105:0/1724550922' entity='client.admin'
2026-03-10T07:16:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:57 vm05 bash[17520]: audit 2026-03-10T07:16:57.780248+0000 mon.vm05 (mon.0) 73 : audit [INF] from='client.? 192.168.123.105:0/1059343211' entity='client.admin'
2026-03-10T07:16:59.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:58 vm05 bash[17520]: audit 2026-03-10T07:16:57.249632+0000 mgr.vm05.wnsmpp (mgr.14118) 31 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:16:59.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:58 vm05 bash[17520]: cephadm 2026-03-10T07:16:57.250439+0000 mgr.vm05.wnsmpp (mgr.14118) 32 : cephadm [INF] Saving service alertmanager spec with placement count:1
2026-03-10T07:16:59.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:58 vm05 bash[17520]: audit 2026-03-10T07:16:58.077325+0000 mon.vm05 (mon.0) 74 : audit [INF] from='client.? 192.168.123.105:0/3271523185' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout     "epoch": 8,
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout     "available": true,
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout     "active_name": "vm05.wnsmpp",
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout     "num_standby": 0
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart...
2026-03-10T07:16:59.387 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 8...
2026-03-10T07:17:00.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:59 vm05 bash[17520]: audit 2026-03-10T07:16:58.950103+0000 mon.vm05 (mon.0) 75 : audit [INF] from='client.? 192.168.123.105:0/3271523185' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T07:17:00.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:59 vm05 bash[17520]: cluster 2026-03-10T07:16:58.956395+0000 mon.vm05 (mon.0) 76 : cluster [DBG] mgrmap e8: vm05.wnsmpp(active, since 8s)
2026-03-10T07:17:00.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:16:59 vm05 bash[17520]: audit 2026-03-10T07:16:59.336323+0000 mon.vm05 (mon.0) 77 : audit [DBG] from='client.? 192.168.123.105:0/1669122472' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: cluster 2026-03-10T07:17:02.380909+0000 mon.vm05 (mon.0) 78 : cluster [INF] Active manager daemon vm05.wnsmpp restarted
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: cluster 2026-03-10T07:17:02.381338+0000 mon.vm05 (mon.0) 79 : cluster [INF] Activating manager daemon vm05.wnsmpp
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: cluster 2026-03-10T07:17:02.386943+0000 mon.vm05 (mon.0) 80 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: cluster 2026-03-10T07:17:02.387121+0000 mon.vm05 (mon.0) 81 : cluster [DBG] mgrmap e9: vm05.wnsmpp(active, starting, since 0.00589733s)
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: audit 2026-03-10T07:17:02.389463+0000 mon.vm05 (mon.0) 82 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: audit 2026-03-10T07:17:02.389817+0000 mon.vm05 (mon.0) 83 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr metadata", "who": "vm05.wnsmpp", "id": "vm05.wnsmpp"}]: dispatch
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: audit 2026-03-10T07:17:02.390766+0000 mon.vm05 (mon.0) 84 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:17:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: audit 2026-03-10T07:17:02.391152+0000 mon.vm05 (mon.0) 85 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:17:02.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: audit 2026-03-10T07:17:02.391479+0000 mon.vm05 (mon.0) 86 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:17:02.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: cluster 2026-03-10T07:17:02.397348+0000 mon.vm05 (mon.0) 87 : cluster [INF] Manager daemon vm05.wnsmpp is now available
2026-03-10T07:17:02.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:02 vm05 bash[17520]: audit 2026-03-10T07:17:02.413507+0000 mon.vm05 (mon.0) 88 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:17:03.445 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-10T07:17:03.445 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout     "mgrmap_epoch": 10,
2026-03-10T07:17:03.445 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout     "initialized": true
2026-03-10T07:17:03.446 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T07:17:03.446 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 8 is available
2026-03-10T07:17:03.446 INFO:teuthology.orchestra.run.vm05.stdout:Generating a dashboard self-signed certificate...
2026-03-10T07:17:03.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:03 vm05 bash[17520]: audit 2026-03-10T07:17:02.443293+0000 mon.vm05 (mon.0) 89 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:17:03.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:03 vm05 bash[17520]: audit 2026-03-10T07:17:02.444429+0000 mon.vm05 (mon.0) 90 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/trash_purge_schedule"}]: dispatch
2026-03-10T07:17:03.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:03 vm05 bash[17520]: cluster 2026-03-10T07:17:03.390234+0000 mon.vm05 (mon.0) 91 : cluster [DBG] mgrmap e10: vm05.wnsmpp(active, since 1.00902s)
2026-03-10T07:17:03.772 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T07:17:03.772 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial admin user...
2026-03-10T07:17:04.196 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$0vDa04hVNDGx9er.dE/P3.GDg9dW3ZUlg.QIo45uIdp9gm9xk8FYi", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773127024, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T07:17:04.196 INFO:teuthology.orchestra.run.vm05.stdout:Fetching dashboard port number...
2026-03-10T07:17:04.467 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T07:17:04.467 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present
2026-03-10T07:17:04.467 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T07:17:04.468 INFO:teuthology.orchestra.run.vm05.stdout:Ceph Dashboard is now available at:
2026-03-10T07:17:04.468 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.468 INFO:teuthology.orchestra.run.vm05.stdout:             URL: https://vm05.local:8443/
2026-03-10T07:17:04.468 INFO:teuthology.orchestra.run.vm05.stdout:            User: admin
2026-03-10T07:17:04.468 INFO:teuthology.orchestra.run.vm05.stdout:        Password: b6098g3584
2026-03-10T07:17:04.468 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.468 INFO:teuthology.orchestra.run.vm05.stdout:Saving cluster configuration to /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config directory
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:        sudo /home/ubuntu/cephtest/cephadm shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:        sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:        ceph telemetry on
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:For more information see:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.786 INFO:teuthology.orchestra.run.vm05.stdout:        https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T07:17:04.787 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:17:04.787 INFO:teuthology.orchestra.run.vm05.stdout:Bootstrap complete.
2026-03-10T07:17:04.806 INFO:tasks.cephadm:Fetching config...
2026-03-10T07:17:04.807 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:17:04.807 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T07:17:04.809 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T07:17:04.809 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:17:04.809 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T07:17:04.855 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T07:17:04.855 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:17:04.855 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/keyring of=/dev/stdout
2026-03-10T07:17:04.903 INFO:tasks.cephadm:Fetching pub ssh key...
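Bootstrap is complete at this point, and the harness pulls the generated config and keyrings off the node with plain dd reads. To inspect the same cluster interactively, one could reuse the shell invocation printed above; a minimal sketch, assuming the same fsid and the default config and keyring paths:

  sudo /home/ubuntu/cephtest/cephadm shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -- ceph -s
  sudo /home/ubuntu/cephtest/cephadm shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -- ceph orch ps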
2026-03-10T07:17:04.903 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:17:04.903 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T07:17:04.947 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-10T07:17:04.948 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP1n1QO2wP151RwRIlN4GOVML6VT3rzAS72dqHYLqcGm ceph-f0f57d3c-1c50-11f1-837e-f755e850132e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: cephadm 2026-03-10T07:17:03.216211+0000 mgr.vm05.wnsmpp (mgr.14162) 1 : cephadm [INF] [10/Mar/2026:07:17:03] ENGINE Bus STARTING
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: cephadm 2026-03-10T07:17:03.324714+0000 mgr.vm05.wnsmpp (mgr.14162) 2 : cephadm [INF] [10/Mar/2026:07:17:03] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: cephadm 2026-03-10T07:17:03.325358+0000 mgr.vm05.wnsmpp (mgr.14162) 3 : cephadm [INF] [10/Mar/2026:07:17:03] ENGINE Client ('192.168.123.105', 59326) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:03.393028+0000 mgr.vm05.wnsmpp (mgr.14162) 4 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:03.396849+0000 mgr.vm05.wnsmpp (mgr.14162) 5 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: cephadm 2026-03-10T07:17:03.426084+0000 mgr.vm05.wnsmpp (mgr.14162) 6 : cephadm [INF] [10/Mar/2026:07:17:03] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: cephadm 2026-03-10T07:17:03.426461+0000 mgr.vm05.wnsmpp (mgr.14162) 7 : cephadm [INF] [10/Mar/2026:07:17:03] ENGINE Bus STARTED
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:03.678974+0000 mgr.vm05.wnsmpp (mgr.14162) 8 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:03.726551+0000 mon.vm05 (mon.0) 92 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:03.730462+0000 mon.vm05 (mon.0) 93 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:04.000627+0000 mgr.vm05.wnsmpp (mgr.14162) 9 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:17:04.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:04.159239+0000 mon.vm05 (mon.0) 94 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:04.996 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:04 vm05 bash[17520]: audit 2026-03-10T07:17:04.428741+0000 mon.vm05 (mon.0) 95 : audit [DBG] from='client.? 192.168.123.105:0/4000120572' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T07:17:05.000 INFO:teuthology.orchestra.run.vm05.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP1n1QO2wP151RwRIlN4GOVML6VT3rzAS72dqHYLqcGm ceph-f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:17:05.005 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP1n1QO2wP151RwRIlN4GOVML6VT3rzAS72dqHYLqcGm ceph-f0f57d3c-1c50-11f1-837e-f755e850132e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T07:17:05.018 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP1n1QO2wP151RwRIlN4GOVML6VT3rzAS72dqHYLqcGm ceph-f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:17:05.022 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-10T07:17:06.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:05 vm05 bash[17520]: audit 2026-03-10T07:17:04.752779+0000 mon.vm05 (mon.0) 96 : audit [INF] from='client.? 192.168.123.105:0/3281647717' entity='client.admin'
2026-03-10T07:17:06.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:05 vm05 bash[17520]: cluster 2026-03-10T07:17:05.163007+0000 mon.vm05 (mon.0) 97 : cluster [DBG] mgrmap e11: vm05.wnsmpp(active, since 2s)
2026-03-10T07:17:08.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:08 vm05 bash[17520]: audit 2026-03-10T07:17:07.481452+0000 mon.vm05 (mon.0) 98 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:08.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:08 vm05 bash[17520]: audit 2026-03-10T07:17:08.068853+0000 mon.vm05 (mon.0) 99 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:09.314 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:17:09.634 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-10T07:17:09.634 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-10T07:17:10.327 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:10 vm05 bash[17520]: cluster 2026-03-10T07:17:09.072014+0000 mon.vm05 (mon.0) 100 : cluster [DBG] mgrmap e12: vm05.wnsmpp(active, since 6s)
2026-03-10T07:17:10.327 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:10 vm05 bash[17520]: audit 2026-03-10T07:17:09.575939+0000 mon.vm05 (mon.0) 101 : audit [INF] from='client.? 192.168.123.105:0/2688269074' entity='client.admin'
2026-03-10T07:17:14.253 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:17:14.645 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:13.777002+0000 mon.vm05 (mon.0) 102 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:13.780119+0000 mon.vm05 (mon.0) 103 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:13.780901+0000 mon.vm05 (mon.0) 104 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:13.783824+0000 mon.vm05 (mon.0) 105 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:13.784959+0000 mon.vm05 (mon.0) 106 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:13.785963+0000 mon.vm05 (mon.0) 107 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:13.787652+0000 mon.vm05 (mon.0) 108 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: cephadm 2026-03-10T07:17:13.788223+0000 mgr.vm05.wnsmpp (mgr.14162) 10 : cephadm [INF] Deploying daemon ceph-exporter.vm05 on vm05
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:14.607897+0000 mon.vm05 (mon.0) 109 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:14.610810+0000 mon.vm05 (mon.0) 110 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:14.613343+0000 mon.vm05 (mon.0) 111 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:14.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:14.615323+0000 mon.vm05 (mon.0) 112 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:14.897 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:14.615990+0000 mon.vm05 (mon.0) 113 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T07:17:14.897 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:14.617474+0000 mon.vm05 (mon.0) 114 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-10T07:17:14.897 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:14 vm05 bash[17520]: audit 2026-03-10T07:17:14.618765+0000 mon.vm05 (mon.0) 115 : audit [DBG] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:17:14.996 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09
2026-03-10T07:17:14.996 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T07:17:14.996 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf
2026-03-10T07:17:14.999 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T07:17:15.000 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:17:15.044 INFO:tasks.cephadm:Adding host vm09 to orchestrator...
2026-03-10T07:17:15.044 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch host add vm09
2026-03-10T07:17:15.269 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: cephadm 2026-03-10T07:17:14.619316+0000 mgr.vm05.wnsmpp (mgr.14162) 11 : cephadm [INF] Deploying daemon crash.vm05 on vm05
2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:14.815422+0000 mgr.vm05.wnsmpp (mgr.14162) 12 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:14.883102+0000 mon.vm05 (mon.0) 116 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:15.477079+0000 mon.vm05 (mon.0) 117 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:15.479586+0000 mon.vm05 (mon.0) 118 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:15.481687+0000 mon.vm05 (mon.0) 119 : audit [INF] from='mgr.14162
192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:15.481687+0000 mon.vm05 (mon.0) 119 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:15.483992+0000 mon.vm05 (mon.0) 120 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 bash[17520]: audit 2026-03-10T07:17:15.483992+0000 mon.vm05 (mon.0) 120 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:15.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:15 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:17:16.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: cephadm 2026-03-10T07:17:15.484822+0000 mgr.vm05.wnsmpp (mgr.14162) 13 : cephadm [INF] Deploying daemon node-exporter.vm05 on vm05 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: cephadm 2026-03-10T07:17:15.484822+0000 mgr.vm05.wnsmpp (mgr.14162) 13 : cephadm [INF] Deploying daemon node-exporter.vm05 on vm05 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.222371+0000 mon.vm05 (mon.0) 121 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.222371+0000 mon.vm05 (mon.0) 121 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.226266+0000 mon.vm05 (mon.0) 122 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.226266+0000 mon.vm05 (mon.0) 122 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.229148+0000 mon.vm05 (mon.0) 123 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.229148+0000 mon.vm05 (mon.0) 123 : 
audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.231646+0000 mon.vm05 (mon.0) 124 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:17.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:16 vm05 bash[17520]: audit 2026-03-10T07:17:16.231646+0000 mon.vm05 (mon.0) 124 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:18.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:17 vm05 bash[17520]: cephadm 2026-03-10T07:17:16.236946+0000 mgr.vm05.wnsmpp (mgr.14162) 14 : cephadm [INF] Deploying daemon alertmanager.vm05 on vm05 2026-03-10T07:17:18.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:17 vm05 bash[17520]: cephadm 2026-03-10T07:17:16.236946+0000 mgr.vm05.wnsmpp (mgr.14162) 14 : cephadm [INF] Deploying daemon alertmanager.vm05 on vm05 2026-03-10T07:17:18.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:17 vm05 bash[17520]: audit 2026-03-10T07:17:17.423324+0000 mon.vm05 (mon.0) 125 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:18.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:17 vm05 bash[17520]: audit 2026-03-10T07:17:17.423324+0000 mon.vm05 (mon.0) 125 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:19.688 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:17:20.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:20 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:17:21.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:21 vm05 bash[17520]: audit 2026-03-10T07:17:20.097247+0000 mgr.vm05.wnsmpp (mgr.14162) 15 : audit [DBG] from='client.14187 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:17:21.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:21 vm05 bash[17520]: audit 2026-03-10T07:17:20.097247+0000 mgr.vm05.wnsmpp (mgr.14162) 15 : audit [DBG] from='client.14187 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:17:21.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:21 vm05 bash[17520]: cephadm 2026-03-10T07:17:20.651772+0000 mgr.vm05.wnsmpp (mgr.14162) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-10T07:17:21.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:21 vm05 bash[17520]: cephadm 2026-03-10T07:17:20.651772+0000 mgr.vm05.wnsmpp (mgr.14162) 16 : cephadm [INF] Deploying cephadm binary to vm09 2026-03-10T07:17:21.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:21 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:17:22.158 INFO:teuthology.orchestra.run.vm05.stdout:Added host 'vm09' with addr '192.168.123.109' 2026-03-10T07:17:22.225 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch host ls --format=json 2026-03-10T07:17:22.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.531661+0000 mon.vm05 (mon.0) 126 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.531661+0000 mon.vm05 (mon.0) 126 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.534246+0000 mon.vm05 (mon.0) 127 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.534246+0000 mon.vm05 (mon.0) 127 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.536676+0000 mon.vm05 (mon.0) 128 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.536676+0000 mon.vm05 (mon.0) 128 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.538615+0000 mon.vm05 (mon.0) 129 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.538615+0000 mon.vm05 (mon.0) 129 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.542086+0000 mon.vm05 (mon.0) 130 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.542086+0000 mon.vm05 (mon.0) 130 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.543671+0000 mon.vm05 (mon.0) 131 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.543671+0000 mon.vm05 (mon.0) 131 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: cephadm 2026-03-10T07:17:21.547497+0000 mgr.vm05.wnsmpp (mgr.14162) 17 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: cephadm 2026-03-10T07:17:21.547497+0000 mgr.vm05.wnsmpp (mgr.14162) 17 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.570849+0000 mon.vm05 (mon.0) 132 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.570849+0000 mon.vm05 (mon.0) 132 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.573901+0000 mon.vm05 (mon.0) 133 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.573901+0000 mon.vm05 (mon.0) 133 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.575182+0000 mon.vm05 (mon.0) 134 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.575182+0000 mon.vm05 (mon.0) 134 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.575469+0000 mgr.vm05.wnsmpp (mgr.14162) 18 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.575469+0000 mgr.vm05.wnsmpp (mgr.14162) 18 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.577479+0000 mon.vm05 (mon.0) 135 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:21.577479+0000 mon.vm05 (mon.0) 135 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: cephadm 2026-03-10T07:17:21.583624+0000 mgr.vm05.wnsmpp (mgr.14162) 19 : cephadm [INF] Deploying daemon grafana.vm05 on vm05 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: cephadm 2026-03-10T07:17:21.583624+0000 mgr.vm05.wnsmpp (mgr.14162) 19 : cephadm [INF] Deploying daemon grafana.vm05 on vm05 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:22.156917+0000 mon.vm05 (mon.0) 136 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:22.156917+0000 mon.vm05 (mon.0) 136 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:22.427356+0000 mon.vm05 (mon.0) 137 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:22.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:22 vm05 bash[17520]: audit 2026-03-10T07:17:22.427356+0000 mon.vm05 (mon.0) 137 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:23.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:23 vm05 bash[17520]: cephadm 2026-03-10T07:17:22.157443+0000 mgr.vm05.wnsmpp (mgr.14162) 20 : cephadm [INF] Added host vm09 2026-03-10T07:17:23.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:23 vm05 bash[17520]: cephadm 2026-03-10T07:17:22.157443+0000 mgr.vm05.wnsmpp (mgr.14162) 20 : cephadm [INF] Added host vm09 2026-03-10T07:17:23.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:23 vm05 bash[17520]: cluster 2026-03-10T07:17:22.391902+0000 mgr.vm05.wnsmpp (mgr.14162) 21 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:23.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:23 vm05 bash[17520]: cluster 2026-03-10T07:17:22.391902+0000 mgr.vm05.wnsmpp (mgr.14162) 21 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:25.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:25 vm05 bash[17520]: cluster 2026-03-10T07:17:24.392085+0000 mgr.vm05.wnsmpp (mgr.14162) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:25.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:25 vm05 bash[17520]: cluster 2026-03-10T07:17:24.392085+0000 mgr.vm05.wnsmpp (mgr.14162) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:26.851 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:17:27.625 
INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:17:27.625 INFO:teuthology.orchestra.run.vm05.stdout:[{"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}] 2026-03-10T07:17:27.791 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:27 vm05 bash[17520]: cluster 2026-03-10T07:17:26.392286+0000 mgr.vm05.wnsmpp (mgr.14162) 23 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:27.791 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:27 vm05 bash[17520]: cluster 2026-03-10T07:17:26.392286+0000 mgr.vm05.wnsmpp (mgr.14162) 23 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:27.792 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T07:17:27.792 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd crush tunables default 2026-03-10T07:17:28.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:28 vm05 bash[17520]: audit 2026-03-10T07:17:27.625959+0000 mgr.vm05.wnsmpp (mgr.14162) 24 : audit [DBG] from='client.14189 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:17:28.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:28 vm05 bash[17520]: audit 2026-03-10T07:17:27.625959+0000 mgr.vm05.wnsmpp (mgr.14162) 24 : audit [DBG] from='client.14189 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:17:29.946 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:29 vm05 bash[17520]: cluster 2026-03-10T07:17:28.392485+0000 mgr.vm05.wnsmpp (mgr.14162) 25 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:29.947 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:29 vm05 bash[17520]: cluster 2026-03-10T07:17:28.392485+0000 mgr.vm05.wnsmpp (mgr.14162) 25 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:17:31.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:31 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:17:31.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:31 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
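The `ceph orch host ls --format=json` output above is what the cephadm task inspects to confirm both vm05 and vm09 are registered with the orchestrator before it continues. A minimal sketch of that kind of check, not teuthology's actual code: the EXPECTED host set and the bare `ceph` invocation are assumptions for illustration (inside the test the command runs through the cephadm shell wrapper shown above).

    import json
    import subprocess

    # Expected hosts for this 2-node job (assumption for illustration).
    EXPECTED = {"vm05", "vm09"}

    # `ceph orch host ls --format=json` prints a JSON list of host records,
    # e.g. [{"addr": ..., "hostname": ..., "labels": [], "status": ""}, ...]
    out = subprocess.check_output(["ceph", "orch", "host", "ls", "--format=json"])
    hosts = {h["hostname"] for h in json.loads(out)}

    missing = EXPECTED - hosts
    if missing:
        raise RuntimeError(f"hosts not registered with orchestrator: {missing}")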
2026-03-10T07:17:31.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:31 vm05 bash[17520]: cluster 2026-03-10T07:17:30.392653+0000 mgr.vm05.wnsmpp (mgr.14162) 26 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.681747+0000 mon.vm05 (mon.0) 138 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.685484+0000 mon.vm05 (mon.0) 139 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.687837+0000 mon.vm05 (mon.0) 140 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.689940+0000 mon.vm05 (mon.0) 141 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.692242+0000 mon.vm05 (mon.0) 142 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.695056+0000 mon.vm05 (mon.0) 143 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.697113+0000 mon.vm05 (mon.0) 144 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:31.698890+0000 mon.vm05 (mon.0) 145 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: cephadm 2026-03-10T07:17:31.895173+0000 mgr.vm05.wnsmpp (mgr.14162) 27 : cephadm [INF] Deploying daemon prometheus.vm05 on vm05
2026-03-10T07:17:32.738 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:32 vm05 bash[17520]: audit 2026-03-10T07:17:32.431623+0000 mon.vm05 (mon.0) 146 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:33.439 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:17:33.773 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:33 vm05 bash[17520]: cluster 2026-03-10T07:17:32.392823+0000 mgr.vm05.wnsmpp (mgr.14162) 28 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:17:34.746 INFO:teuthology.orchestra.run.vm05.stderr:adjusted tunables profile to default
2026-03-10T07:17:34.865 INFO:tasks.cephadm:Adding mon.vm05 on vm05
2026-03-10T07:17:34.866 INFO:tasks.cephadm:Adding mon.vm09 on vm09
2026-03-10T07:17:34.866 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch apply mon '2;vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09'
2026-03-10T07:17:35.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:34 vm05 bash[17520]: audit 2026-03-10T07:17:33.825241+0000 mon.vm05 (mon.0) 147 : audit [INF] from='client.? 192.168.123.105:0/2378135439' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T07:17:35.983 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:36.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:35 vm05 bash[17520]: cluster 2026-03-10T07:17:34.393025+0000 mgr.vm05.wnsmpp (mgr.14162) 29 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:17:36.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:35 vm05 bash[17520]: audit 2026-03-10T07:17:34.745479+0000 mon.vm05 (mon.0) 148 : audit [INF] from='client.? 192.168.123.105:0/2378135439' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T07:17:36.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:35 vm05 bash[17520]: cluster 2026-03-10T07:17:34.747287+0000 mon.vm05 (mon.0) 149 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T07:17:37.013 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:37.548 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update...
2026-03-10T07:17:37.610 DEBUG:teuthology.orchestra.run.vm09:mon.vm09> sudo journalctl -f -n 0 -u ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm09.service
2026-03-10T07:17:37.611 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
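From here the task polls `ceph mon dump -f json` until the monmap lists both mons; the repeated "Waiting for 2 mons in monmap..." lines below are iterations of that loop (each dump still shows only vm05 at this point). A minimal sketch of what such a wait amounts to, assuming only the monmap JSON shape visible in the dumps below; the timeout and poll interval are illustrative, not teuthology's actual values.

    import json
    import subprocess
    import time

    def wait_for_mons(want: int, timeout: float = 300.0, interval: float = 3.0) -> None:
        # Poll `ceph mon dump -f json`; the dump carries the mon list under
        # the "mons" key, as in the monmap JSON logged below.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.check_output(["ceph", "mon", "dump", "-f", "json"])
            if len(json.loads(out)["mons"]) >= want:
                return
            time.sleep(interval)
        raise TimeoutError(f"monmap did not reach {want} mons within {timeout}s")

    wait_for_mons(2)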
2026-03-10T07:17:37.611 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph mon dump -f json
2026-03-10T07:17:38.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:37 vm05 bash[17520]: cluster 2026-03-10T07:17:36.393194+0000 mgr.vm05.wnsmpp (mgr.14162) 30 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:17:38.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:37 vm05 bash[17520]: audit 2026-03-10T07:17:37.549048+0000 mon.vm05 (mon.0) 150 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:38.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:17:38.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:17:38.773 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:39.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 bash[17520]: audit 2026-03-10T07:17:37.479321+0000 mgr.vm05.wnsmpp (mgr.14162) 31 : audit [DBG] from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:17:39.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 bash[17520]: cephadm 2026-03-10T07:17:37.480707+0000 mgr.vm05.wnsmpp (mgr.14162) 32 : cephadm [INF] Saving service mon spec with placement vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09;count:2
2026-03-10T07:17:39.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 bash[17520]: audit 2026-03-10T07:17:38.350031+0000 mon.vm05 (mon.0) 151 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:39.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 bash[17520]: audit 2026-03-10T07:17:38.355761+0000 mon.vm05 (mon.0) 152 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:39.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 bash[17520]: audit 2026-03-10T07:17:38.358967+0000 mon.vm05 (mon.0) 153 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:39.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:38 vm05 bash[17520]: audit 2026-03-10T07:17:38.360913+0000 mon.vm05 (mon.0) 154 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
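The placement string applied above, '2;vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09', packs a daemon count followed by semicolon-separated host:addr=name entries. A small illustrative parser for exactly this shape; cephadm's real placement parser is more general, and `parse_placement` is a hypothetical helper, not a cephadm API.

    # Hypothetical parser for the exact placement shape logged above:
    # "<count>;<host>:<addr>=<name>;..."
    def parse_placement(spec: str):
        count, *entries = spec.split(";")
        hosts = []
        for entry in entries:
            host, _, rest = entry.partition(":")
            addr, _, name = rest.partition("=")
            hosts.append({"hostname": host, "addr": addr, "name": name})
        return int(count), hosts

    count, hosts = parse_placement("2;vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09")
    assert count == 2
    assert hosts[1] == {"hostname": "vm09", "addr": "192.168.123.109", "name": "vm09"}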
2026-03-10T07:17:39.795 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:40.084 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-10T07:17:40.085 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T07:17:40.085 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","modified":"2026-03-10T07:16:38.694276Z","created":"2026-03-10T07:16:38.694276Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T07:17:40.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:39 vm05 bash[17520]: cluster 2026-03-10T07:17:38.393446+0000 mgr.vm05.wnsmpp (mgr.14162) 33 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:17:40.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:39 vm05 bash[17520]: audit 2026-03-10T07:17:39.362586+0000 mon.vm05 (mon.0) 155 : audit [INF] from='mgr.14162 192.168.123.105:0/3659129071' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-10T07:17:40.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:39 vm05 bash[17520]: cluster 2026-03-10T07:17:39.366440+0000 mon.vm05 (mon.0) 156 : cluster [DBG] mgrmap e13: vm05.wnsmpp(active, since 36s)
2026-03-10T07:17:41.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:40 vm05 bash[17520]: audit 2026-03-10T07:17:40.085806+0000 mon.vm05 (mon.0) 157 : audit [DBG] from='client.? 192.168.123.109:0/2836448548' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:17:41.401 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T07:17:41.401 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph mon dump -f json
2026-03-10T07:17:42.514 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:42.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: cluster 2026-03-10T07:17:42.623701+0000 mon.vm05 (mon.0) 158 : cluster [INF] Active manager daemon vm05.wnsmpp restarted
2026-03-10T07:17:42.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: cluster 2026-03-10T07:17:42.623997+0000 mon.vm05 (mon.0) 159 : cluster [INF] Activating manager daemon vm05.wnsmpp
2026-03-10T07:17:42.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: cluster 2026-03-10T07:17:42.628709+0000 mon.vm05 (mon.0) 160 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: cluster 2026-03-10T07:17:42.628829+0000 mon.vm05 (mon.0) 161 : cluster [DBG] mgrmap e14: vm05.wnsmpp(active, starting, since 0.00499823s)
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.629697+0000 mon.vm05 (mon.0) 162 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.630695+0000 mon.vm05 (mon.0) 163 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr metadata", "who": "vm05.wnsmpp", "id": "vm05.wnsmpp"}]: dispatch
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.631476+0000 mon.vm05 (mon.0) 164 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.631570+0000 mon.vm05 (mon.0) 165 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.631703+0000 mon.vm05 (mon.0) 166 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: cluster 2026-03-10T07:17:42.637600+0000 mon.vm05 (mon.0) 167 : cluster [INF] Manager daemon vm05.wnsmpp is now available
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.654618+0000 mon.vm05 (mon.0) 168 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.662377+0000 mon.vm05 (mon.0) 169 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:17:42.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:42 vm05 bash[17520]: audit 2026-03-10T07:17:42.671991+0000 mon.vm05 (mon.0) 170 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:17:43.543 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:43.819 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T07:17:43.820 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","modified":"2026-03-10T07:16:38.694276Z","created":"2026-03-10T07:16:38.694276Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T07:17:43.820 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-10T07:17:43.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:43 vm05 bash[17520]: audit 2026-03-10T07:17:42.680128+0000 mon.vm05 (mon.0) 171 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/mirror_snapshot_schedule"}]: dispatch
2026-03-10T07:17:43.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:43 vm05 bash[17520]: audit 2026-03-10T07:17:42.682548+0000 mon.vm05 (mon.0) 172 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.wnsmpp/trash_purge_schedule"}]: dispatch
2026-03-10T07:17:43.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:43 vm05 bash[17520]: audit 2026-03-10T07:17:43.135799+0000 mon.vm05 (mon.0) 173 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:43.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:43 vm05 bash[17520]: cluster 2026-03-10T07:17:43.632510+0000 mon.vm05 (mon.0) 174 : cluster [DBG] mgrmap e15: vm05.wnsmpp(active, since 1.00868s)
2026-03-10T07:17:44.889 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T07:17:44.890 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph mon dump -f json
2026-03-10T07:17:44.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:44 vm05 bash[17520]: cephadm 2026-03-10T07:17:43.737911+0000 mgr.vm05.wnsmpp (mgr.14195) 1 : cephadm [INF] [10/Mar/2026:07:17:43] ENGINE Bus STARTING
2026-03-10T07:17:44.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:44 vm05 bash[17520]: audit 2026-03-10T07:17:43.820847+0000 mon.vm05 (mon.0) 175 : audit [DBG] from='client.? 192.168.123.109:0/2700597816' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:17:44.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:44 vm05 bash[17520]: cephadm 2026-03-10T07:17:43.839503+0000 mgr.vm05.wnsmpp (mgr.14195) 2 : cephadm [INF] [10/Mar/2026:07:17:43] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T07:17:44.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:44 vm05 bash[17520]: cephadm 2026-03-10T07:17:43.950260+0000 mgr.vm05.wnsmpp (mgr.14195) 3 : cephadm [INF] [10/Mar/2026:07:17:43] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T07:17:44.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:44 vm05 bash[17520]: cephadm 2026-03-10T07:17:43.950329+0000 mgr.vm05.wnsmpp (mgr.14195) 4 : cephadm [INF] [10/Mar/2026:07:17:43] ENGINE Bus STARTED
2026-03-10T07:17:44.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:44 vm05 bash[17520]: cephadm 2026-03-10T07:17:43.950777+0000 mgr.vm05.wnsmpp (mgr.14195) 5 : cephadm [INF] [10/Mar/2026:07:17:43] ENGINE Client ('192.168.123.105', 57978) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T07:17:44.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:44 vm05 bash[17520]: audit 2026-03-10T07:17:44.422599+0000 mon.vm05 (mon.0) 176 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:46.010 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:46.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:46 vm05 bash[17520]: audit 2026-03-10T07:17:45.013749+0000 mon.vm05 (mon.0) 177 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:46.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:46 vm05 bash[17520]: cluster 2026-03-10T07:17:45.425410+0000 mon.vm05 (mon.0) 178 : cluster [DBG] mgrmap e16: vm05.wnsmpp(active, since 2s)
2026-03-10T07:17:48.227 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T07:17:48.883 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T07:17:48.883 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","modified":"2026-03-10T07:16:38.694276Z","created":"2026-03-10T07:16:38.694276Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T07:17:48.883 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-10T07:17:49.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:48 vm05 bash[17520]: audit 2026-03-10T07:17:47.830506+0000 mon.vm05 (mon.0) 179 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:49.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:48 vm05 bash[17520]: audit 2026-03-10T07:17:47.881175+0000 mon.vm05 (mon.0) 180 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:49.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:48 vm05 bash[17520]: audit 2026-03-10T07:17:47.995387+0000 mon.vm05 (mon.0) 181 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:49.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:48 vm05 bash[17520]: audit 2026-03-10T07:17:48.087196+0000 mon.vm05 (mon.0) 182 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:49.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:48 vm05 bash[17520]: audit 2026-03-10T07:17:48.088142+0000 mon.vm05 (mon.0) 183 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-10T07:17:49.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:48 vm05 bash[17520]: audit 2026-03-10T07:17:48.317093+0000 mon.vm05 (mon.0) 184 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:49.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:48 vm05 bash[17520]: audit 2026-03-10T07:17:48.465395+0000 mon.vm05 (mon.0) 185 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:49.953 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T07:17:49.953 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph mon dump -f json
2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:48.884467+0000 mon.vm05 (mon.0) 186 : audit [DBG] from='client.?
192.168.123.109:0/2115934285' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:48.884467+0000 mon.vm05 (mon.0) 186 : audit [DBG] from='client.? 192.168.123.109:0/2115934285' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.100544+0000 mon.vm05 (mon.0) 187 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.100544+0000 mon.vm05 (mon.0) 187 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.104345+0000 mon.vm05 (mon.0) 188 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.104345+0000 mon.vm05 (mon.0) 188 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.105284+0000 mon.vm05 (mon.0) 189 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.105284+0000 mon.vm05 (mon.0) 189 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.106017+0000 mon.vm05 (mon.0) 190 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.106017+0000 mon.vm05 (mon.0) 190 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.106390+0000 mon.vm05 (mon.0) 191 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.106390+0000 mon.vm05 (mon.0) 191 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.107132+0000 mgr.vm05.wnsmpp (mgr.14195) 6 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T07:17:50.211 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.107132+0000 mgr.vm05.wnsmpp (mgr.14195) 6 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.107255+0000 mgr.vm05.wnsmpp (mgr.14195) 7 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.107255+0000 mgr.vm05.wnsmpp (mgr.14195) 7 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.142965+0000 mgr.vm05.wnsmpp (mgr.14195) 8 : cephadm [INF] Updating vm05:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.142965+0000 mgr.vm05.wnsmpp (mgr.14195) 8 : cephadm [INF] Updating vm05:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.148414+0000 mgr.vm05.wnsmpp (mgr.14195) 9 : cephadm [INF] Updating vm09:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.148414+0000 mgr.vm05.wnsmpp (mgr.14195) 9 : cephadm [INF] Updating vm09:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.179121+0000 mgr.vm05.wnsmpp (mgr.14195) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.179121+0000 mgr.vm05.wnsmpp (mgr.14195) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.185853+0000 mgr.vm05.wnsmpp (mgr.14195) 11 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.185853+0000 mgr.vm05.wnsmpp (mgr.14195) 11 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.218432+0000 mgr.vm05.wnsmpp (mgr.14195) 12 : cephadm [INF] Updating vm05:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.client.admin.keyring 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.218432+0000 mgr.vm05.wnsmpp (mgr.14195) 12 : cephadm [INF] Updating vm05:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.client.admin.keyring 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.228500+0000 mgr.vm05.wnsmpp (mgr.14195) 13 : cephadm [INF] Updating vm09:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.client.admin.keyring 2026-03-10T07:17:50.211 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.228500+0000 mgr.vm05.wnsmpp (mgr.14195) 13 : cephadm [INF] Updating vm09:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.client.admin.keyring 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.262124+0000 mon.vm05 (mon.0) 192 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.262124+0000 mon.vm05 (mon.0) 192 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.265108+0000 mon.vm05 (mon.0) 193 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.265108+0000 mon.vm05 (mon.0) 193 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.267519+0000 mon.vm05 (mon.0) 194 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.267519+0000 mon.vm05 (mon.0) 194 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.269480+0000 mon.vm05 (mon.0) 195 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.269480+0000 mon.vm05 (mon.0) 195 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.272064+0000 mon.vm05 (mon.0) 196 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.272064+0000 mon.vm05 (mon.0) 196 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.273040+0000 mon.vm05 (mon.0) 197 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.273040+0000 mon.vm05 (mon.0) 197 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 
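[editor's note] The "auth get-or-create" audit entries above show how cephadm hands caps to the mon: as a flat, order-preserving list of alternating daemon-type/capability strings, which is why "mon" legitimately appears twice for client.ceph-exporter.vm09 (once for the profile, once for "allow r"). A minimal Python sketch of that flattening; the helper name is hypothetical and this is not cephadm source:

    import json

    def build_auth_get_or_create(entity, caps):
        # caps: list of (daemon_type, capability) pairs; order is preserved,
        # and a type such as "mon" may appear more than once, as in the
        # client.ceph-exporter.vm09 audit entry above.
        flat = []
        for daemon_type, capability in caps:
            flat.extend([daemon_type, capability])
        return json.dumps([{"prefix": "auth get-or-create",
                            "entity": entity,
                            "caps": flat}])

    print(build_auth_get_or_create(
        "client.ceph-exporter.vm09",
        [("mon", "profile ceph-exporter"), ("mon", "allow r"),
         ("mgr", "allow r"), ("osd", "allow r")]))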
2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.274006+0000 mon.vm05 (mon.0) 198 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.274006+0000 mon.vm05 (mon.0) 198 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T07:17:50.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.275349+0000 mon.vm05 (mon.0) 199 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:50.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: audit 2026-03-10T07:17:49.275349+0000 mon.vm05 (mon.0) 199 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:50.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.275916+0000 mgr.vm05.wnsmpp (mgr.14195) 14 : cephadm [INF] Deploying daemon ceph-exporter.vm09 on vm09 2026-03-10T07:17:50.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:49 vm05 bash[17520]: cephadm 2026-03-10T07:17:49.275916+0000 mgr.vm05.wnsmpp (mgr.14195) 14 : cephadm [INF] Deploying daemon ceph-exporter.vm09 on vm09 2026-03-10T07:17:51.098 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf 2026-03-10T07:17:51.731 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T07:17:51.731 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","modified":"2026-03-10T07:16:38.694276Z","created":"2026-03-10T07:16:38.694276Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T07:17:51.731 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.623831+0000 mon.vm05 (mon.0) 200 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.623831+0000 mon.vm05 (mon.0) 200 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 
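[editor's note] The monmap dumps interleaved above are what tasks.cephadm inspects for each "Waiting for 2 mons in monmap..." round. A minimal sketch of the check, assuming only the JSON shape shown (trimmed here to the fields actually used); the real implementation may differ:

    import json

    # Trimmed from the `ceph mon dump -f json` output above.
    raw_dump = '{"epoch": 1, "quorum": [0], "mons": [{"rank": 0, "name": "vm05"}]}'
    monmap = json.loads(raw_dump)
    print(len(monmap["mons"]), [m["name"] for m in monmap["mons"]], monmap["quorum"])
    # -> 1 ['vm05'] [0], so the task keeps waiting until mon.vm09 joins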
2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.627271+0000 mon.vm05 (mon.0) 201 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.627271+0000 mon.vm05 (mon.0) 201 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.629930+0000 mon.vm05 (mon.0) 202 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.629930+0000 mon.vm05 (mon.0) 202 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.632153+0000 mon.vm05 (mon.0) 203 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.632153+0000 mon.vm05 (mon.0) 203 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.633599+0000 mon.vm05 (mon.0) 204 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.633599+0000 mon.vm05 (mon.0) 204 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T07:17:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.634773+0000 mon.vm05 (mon.0) 205 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.634773+0000 mon.vm05 (mon.0) 205 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.636864+0000 mon.vm05 (mon.0) 206 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:50.636864+0000 mon.vm05 (mon.0) 206 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: cephadm 2026-03-10T07:17:50.637415+0000 mgr.vm05.wnsmpp (mgr.14195) 15 : cephadm [INF] Deploying daemon crash.vm09 on vm09 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: cephadm 2026-03-10T07:17:50.637415+0000 mgr.vm05.wnsmpp (mgr.14195) 15 : cephadm [INF] Deploying daemon crash.vm09 on vm09 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.451420+0000 mon.vm05 (mon.0) 207 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.451420+0000 mon.vm05 (mon.0) 207 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.455311+0000 mon.vm05 (mon.0) 208 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.455311+0000 mon.vm05 (mon.0) 208 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.458436+0000 mon.vm05 (mon.0) 209 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.458436+0000 mon.vm05 (mon.0) 209 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.461766+0000 mon.vm05 (mon.0) 210 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:51 vm05 bash[17520]: audit 2026-03-10T07:17:51.461766+0000 mon.vm05 (mon.0) 210 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.809 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T07:17:52.809 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph mon dump -f json 2026-03-10T07:17:52.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: cephadm 2026-03-10T07:17:51.462823+0000 mgr.vm05.wnsmpp (mgr.14195) 16 : cephadm [INF] Deploying daemon node-exporter.vm09 on vm09 2026-03-10T07:17:52.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: cephadm 2026-03-10T07:17:51.462823+0000 mgr.vm05.wnsmpp (mgr.14195) 16 : cephadm [INF] Deploying daemon node-exporter.vm09 on vm09 2026-03-10T07:17:52.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:51.732095+0000 mon.vm05 (mon.0) 211 : audit [DBG] from='client.? 
192.168.123.109:0/3067035348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:17:52.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:51.732095+0000 mon.vm05 (mon.0) 211 : audit [DBG] from='client.? 192.168.123.109:0/3067035348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:17:52.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.155830+0000 mon.vm05 (mon.0) 212 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.155830+0000 mon.vm05 (mon.0) 212 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.158850+0000 mon.vm05 (mon.0) 213 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.158850+0000 mon.vm05 (mon.0) 213 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.161617+0000 mon.vm05 (mon.0) 214 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.161617+0000 mon.vm05 (mon.0) 214 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.164359+0000 mon.vm05 (mon.0) 215 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.164359+0000 mon.vm05 (mon.0) 215 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.165742+0000 mon.vm05 (mon.0) 216 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.rfdvwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.165742+0000 mon.vm05 (mon.0) 216 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.rfdvwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.167050+0000 mon.vm05 (mon.0) 217 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm09.rfdvwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T07:17:52.961 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.169322+0000 mon.vm05 (mon.0) 218 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: audit 2026-03-10T07:17:52.169833+0000 mon.vm05 (mon.0) 219 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:52.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:52 vm05 bash[17520]: cephadm 2026-03-10T07:17:52.170369+0000 mgr.vm05.wnsmpp (mgr.14195) 17 : cephadm [INF] Deploying daemon mgr.vm09.rfdvwa on vm09 2026-03-10T07:17:53.268 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:53 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
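[editor's note] The systemd warning above comes from line 23 of the cephadm-generated unit template named in the message. A hedged sketch (not part of cephadm or teuthology) of scanning a unit file for the deprecated setting:

    # Flag the deprecated KillMode=none setting systemd warns about.
    # The unit path is taken verbatim from the warning; the scanner is
    # illustrative only.
    unit = "/etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service"
    with open(unit) as f:
        for lineno, line in enumerate(f, start=1):
            if line.strip() == "KillMode=none":  # assumes no inline whitespace
                print(f"{unit}:{lineno}: deprecated KillMode=none "
                      "(use 'mixed' or 'control-group')")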
2026-03-10T07:17:53.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.661030+0000 mon.vm05 (mon.0) 220 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.661030+0000 mon.vm05 (mon.0) 220 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.980074+0000 mon.vm05 (mon.0) 221 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.980074+0000 mon.vm05 (mon.0) 221 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.982699+0000 mon.vm05 (mon.0) 222 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.982699+0000 mon.vm05 (mon.0) 222 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.985151+0000 mon.vm05 (mon.0) 223 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.985151+0000 mon.vm05 (mon.0) 223 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.987234+0000 mon.vm05 (mon.0) 224 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.987234+0000 mon.vm05 (mon.0) 224 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.988130+0000 mon.vm05 (mon.0) 225 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.988130+0000 mon.vm05 (mon.0) 225 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.988663+0000 mon.vm05 (mon.0) 226 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: audit 2026-03-10T07:17:52.988663+0000 mon.vm05 (mon.0) 226 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' 
entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:53.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:53 vm05 bash[17520]: cephadm 2026-03-10T07:17:52.989193+0000 mgr.vm05.wnsmpp (mgr.14195) 18 : cephadm [INF] Deploying daemon mon.vm09 on vm09 2026-03-10T07:17:54.089 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:53 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:17:54.089 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:17:54.385 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 systemd[1]: Started Ceph mon.vm09 for f0f57d3c-1c50-11f1-837e-f755e850132e.
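[editor's note] Between daemon deployments the task keeps re-running the exact cephadm shell command visible in the DEBUG lines above until mon.vm09 joins the monmap. An illustrative version of that retry loop; tasks.cephadm actually drives this over SSH via teuthology.orchestra, so take this as a sketch, not the task's source:

    import json
    import subprocess
    import time

    # Command copied from the teuthology DEBUG lines above.
    CMD = ["sudo", "/home/ubuntu/cephtest/cephadm",
           "--image", "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
           "shell", "-c", "/etc/ceph/ceph.conf",
           "-k", "/etc/ceph/ceph.client.admin.keyring",
           "--fsid", "f0f57d3c-1c50-11f1-837e-f755e850132e",
           "--", "ceph", "mon", "dump", "-f", "json"]

    while True:
        monmap = json.loads(subprocess.check_output(CMD))
        if len(monmap["mons"]) >= 2:  # mon.vm05 plus the newly deployed mon.vm09
            break
        time.sleep(1)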
2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.379+0000 7f9638355d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.379+0000 7f9638355d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.379+0000 7f9638355d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 0 load: jerasure load: lrc 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Git sha 0 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: DB SUMMARY 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: DB Session ID: NFKCEP8E2U3HC9ETEL5M 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm09/store.db dir, Total Num: 0, files: 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm09/store.db: 000004.log size: 511 ; 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T07:17:54.660 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T07:17:54.661 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.env: 0x55b4d4903dc0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.info_log: 0x55b4df7bcde0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 
2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.db_log_dir: 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.wal_dir: 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.write_buffer_manager: 0x55b4df7c1900 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: 
Options.enable_thread_tracking: 0 2026-03-10T07:17:54.661 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.row_cache: None 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.wal_filter: None 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T07:17:54.662 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 
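[editor's note] The ceph-mon startup dump above emits one "Options.<name>: <value>" record per RocksDB option. A small illustrative parser for pulling those pairs out of captured journalctl lines; the regex is an assumption for this log format, not teuthology code:

    import re

    OPT = re.compile(r"rocksdb:\s+Options\.([A-Za-z0-9_.]+)\s*:\s*(.*?)\s*$")

    def parse_rocksdb_options(lines):
        # Collect option name/value pairs from RocksDB startup log lines.
        opts = {}
        for line in lines:
            m = OPT.search(line)
            if m:
                opts[m.group(1)] = m.group(2)
        return opts

    sample = ["... rocksdb: Options.max_open_files: -1",
              "... rocksdb: Options.stats_dump_period_sec: 600"]
    print(parse_rocksdb_options(sample))
    # -> {'max_open_files': '-1', 'stats_dump_period_sec': '600'}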
2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T07:17:54.662 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Compression algorithms supported: 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kZSTD supported: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm09/store.db/MANIFEST-000005 2026-03-10T07:17:54.663 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.merge_operator: 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4df7bc5c0) 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cache_index_and_filter_blocks: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: pin_top_level_index_and_filter: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: index_type: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: data_block_index_type: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: index_shortening: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: checksum: 4 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: no_block_cache: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_cache: 0x55b4df7e3350 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_cache_name: BinnedLRUCache 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_cache_options: 2026-03-10T07:17:54.663 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: capacity : 536870912 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: num_shard_bits : 4 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: strict_capacity_limit : 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: high_pri_pool_ratio: 0.000 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_cache_compressed: (nil) 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: persistent_cache: (nil) 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_size: 4096 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_size_deviation: 10 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_restart_interval: 16 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: index_block_restart_interval: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: metadata_block_size: 4096 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: partition_filters: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: use_delta_encoding: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: filter_policy: bloomfilter 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: whole_key_filtering: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: verify_compression: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: read_amp_bytes_per_bit: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: format_version: 5 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: enable_index_compression: 1 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: block_align: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: max_auto_readahead_size: 262144 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: prepopulate_block_cache: 0 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: initial_auto_readahead_size: 8192 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: num_file_reads_for_auto_readahead: 2 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression: 
NoCompression 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T07:17:54.663 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.num_levels: 7 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.window_bits: -14 
2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 
bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 
2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T07:17:54.664 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T07:17:54.665 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm09/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: 
[db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 25051b47-7c52-4825-8512-9cf013f49cea 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127074387033, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.383+0000 7f9638355d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.399+0000 7f9638355d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127074400524, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773127074, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25051b47-7c52-4825-8512-9cf013f49cea", "db_session_id": "NFKCEP8E2U3HC9ETEL5M", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.399+0000 7f9638355d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773127074400616, "job": 1, "event": "recovery_finished"} 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.399+0000 7f9638355d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.403+0000 7f9638355d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm09/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.403+0000 7f9638355d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b4df7e4e00 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 
2026-03-10T07:17:54.403+0000 7f9638355d80 4 rocksdb: DB pointer 0x55b4df8f2000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.403+0000 7f9638355d80 0 mon.vm09 does not exist in monmap, will attempt to join an existing cluster 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.403+0000 7f9638355d80 0 using public_addr v2:192.168.123.109:0/0 -> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.403+0000 7f962e11f640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.403+0000 7f962e11f640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: ** DB Stats ** 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: ** Compaction Stats [default] ** 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.013 0 0 0.0 0.0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.013 0 0 0.0 0.0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.013 0 0 0.0 
0.0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: ** Compaction Stats [default] ** 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.013 0 0 0.0 0.0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Cumulative compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Interval compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:17:54.665 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Block cache BinnedLRUCache@0x55b4df7e3350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.407+0000 
7f9638355d80 0 starting mon.vm09 rank -1 at public addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] at bind addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon_data /var/lib/ceph/mon/ceph-vm09 fsid f0f57d3c-1c50-11f1-837e-f755e850132e 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.407+0000 7f9638355d80 1 mon.vm09@-1(???) e0 preinit fsid f0f57d3c-1c50-11f1-837e-f755e850132e 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 0 mon.vm09@-1(synchronizing).mds e1 new map 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 0 mon.vm09@-1(synchronizing).mds e1 print_map 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: e1 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: btime 2026-03-10T07:16:39.906868+0000 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: legacy client fscid: -1 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: No filesystems configured 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 1 mon.vm09@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 1 mon.vm09@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 1 mon.vm09@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 1 mon.vm09@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 1 mon.vm09@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.479+0000 7f9631125640 1 mon.vm09@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.483+0000 7f9631125640 1
mon.vm09@-1(synchronizing).osd e5 e5: 0 total, 0 up, 0 in 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.483+0000 7f9631125640 0 mon.vm09@-1(synchronizing).osd e5 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.483+0000 7f9631125640 0 mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.483+0000 7f9631125640 0 mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: debug 2026-03-10T07:17:54.483+0000 7f9631125640 0 mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.623831+0000 mon.vm05 (mon.0) 200 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.623831+0000 mon.vm05 (mon.0) 200 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.627271+0000 mon.vm05 (mon.0) 201 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.627271+0000 mon.vm05 (mon.0) 201 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.629930+0000 mon.vm05 (mon.0) 202 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.629930+0000 mon.vm05 (mon.0) 202 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.632153+0000 mon.vm05 (mon.0) 203 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.632153+0000 mon.vm05 (mon.0) 203 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.633599+0000 mon.vm05 (mon.0) 204 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.633599+0000 mon.vm05 (mon.0) 204 : audit 
[INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.634773+0000 mon.vm05 (mon.0) 205 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.634773+0000 mon.vm05 (mon.0) 205 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.636864+0000 mon.vm05 (mon.0) 206 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:54.666 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:50.636864+0000 mon.vm05 (mon.0) 206 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cephadm 2026-03-10T07:17:50.637415+0000 mgr.vm05.wnsmpp (mgr.14195) 15 : cephadm [INF] Deploying daemon crash.vm09 on vm09 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cephadm 2026-03-10T07:17:50.637415+0000 mgr.vm05.wnsmpp (mgr.14195) 15 : cephadm [INF] Deploying daemon crash.vm09 on vm09 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.451420+0000 mon.vm05 (mon.0) 207 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.451420+0000 mon.vm05 (mon.0) 207 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.455311+0000 mon.vm05 (mon.0) 208 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.455311+0000 mon.vm05 (mon.0) 208 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.458436+0000 mon.vm05 (mon.0) 209 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.458436+0000 mon.vm05 (mon.0) 209 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.461766+0000 mon.vm05 (mon.0) 210 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.461766+0000 mon.vm05 (mon.0) 210 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cephadm 2026-03-10T07:17:51.462823+0000 mgr.vm05.wnsmpp (mgr.14195) 16 : cephadm [INF] Deploying daemon node-exporter.vm09 on vm09 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cephadm 2026-03-10T07:17:51.462823+0000 mgr.vm05.wnsmpp (mgr.14195) 16 : cephadm [INF] Deploying daemon node-exporter.vm09 on vm09 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.732095+0000 mon.vm05 (mon.0) 211 : audit [DBG] from='client.? 192.168.123.109:0/3067035348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:51.732095+0000 mon.vm05 (mon.0) 211 : audit [DBG] from='client.? 192.168.123.109:0/3067035348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.155830+0000 mon.vm05 (mon.0) 212 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.155830+0000 mon.vm05 (mon.0) 212 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.158850+0000 mon.vm05 (mon.0) 213 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.158850+0000 mon.vm05 (mon.0) 213 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.161617+0000 mon.vm05 (mon.0) 214 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.161617+0000 mon.vm05 (mon.0) 214 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.164359+0000 mon.vm05 (mon.0) 215 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.164359+0000 mon.vm05 (mon.0) 215 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 
bash[21099]: audit 2026-03-10T07:17:52.165742+0000 mon.vm05 (mon.0) 216 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.rfdvwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.167050+0000 mon.vm05 (mon.0) 217 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm09.rfdvwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.169322+0000 mon.vm05 (mon.0) 218 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.169833+0000 mon.vm05 (mon.0) 219 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cephadm 2026-03-10T07:17:52.170369+0000 mgr.vm05.wnsmpp (mgr.14195) 17 : cephadm [INF] Deploying daemon mgr.vm09.rfdvwa on vm09
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.661030+0000 mon.vm05 (mon.0) 220 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.980074+0000 mon.vm05 (mon.0) 221 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.982699+0000 mon.vm05 (mon.0) 222 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.985151+0000 mon.vm05 (mon.0) 223 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.987234+0000 mon.vm05 (mon.0) 224 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.988130+0000 mon.vm05 (mon.0) 225 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: audit 2026-03-10T07:17:52.988663+0000 mon.vm05 (mon.0) 226 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:17:54.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:54 vm09 bash[21099]: cephadm 2026-03-10T07:17:52.989193+0000 mgr.vm05.wnsmpp (mgr.14195) 18 : cephadm [INF] Deploying daemon mon.vm09 on vm09
2026-03-10T07:17:59.448 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm09/config
2026-03-10T07:17:59.864 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T07:17:59.864 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","modified":"2026-03-10T07:17:54.507938Z","created":"2026-03-10T07:16:38.694276Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm09","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T07:17:59.864 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2
2026-03-10T07:17:59.876 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:54.510932+0000 mon.vm05 (mon.0) 234 : cluster [INF] mon.vm05 calling monitor election
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:54.511897+0000 mon.vm05 (mon.0) 235 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:54.511936+0000 mon.vm05 (mon.0) 236 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:55.499540+0000 mon.vm05 (mon.0) 237 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:56.497973+0000 mon.vm09 (mon.1) 1 : cluster [INF] mon.vm09 calling monitor election
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:56.499603+0000 mon.vm05 (mon.0) 238 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:56.911680+0000 mon.vm05 (mon.0) 239 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.rfdvwa/crt"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:57.499766+0000 mon.vm05 (mon.0) 240 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:57.672558+0000 mon.vm05 (mon.0) 241 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:58.499912+0000 mon.vm05 (mon.0) 242 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:59.500503+0000 mon.vm05 (mon.0) 243 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.515553+0000 mon.vm05 (mon.0) 244 : cluster [INF] mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1)
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519438+0000 mon.vm05 (mon.0) 245 : cluster [DBG] monmap epoch 2
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519459+0000 mon.vm05 (mon.0) 246 : cluster [DBG] fsid f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519471+0000 mon.vm05 (mon.0) 247 : cluster [DBG] last_changed 2026-03-10T07:17:54.507938+0000
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519480+0000 mon.vm05 (mon.0) 248 : cluster [DBG] created 2026-03-10T07:16:38.694276+0000
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519489+0000 mon.vm05 (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519498+0000 mon.vm05 (mon.0) 250 : cluster [DBG] election_strategy: 1
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519508+0000 mon.vm05 (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519516+0000 mon.vm05 (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519908+0000 mon.vm05 (mon.0) 253 : cluster [DBG] fsmap
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.519934+0000 mon.vm05 (mon.0) 254 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.520074+0000 mon.vm05 (mon.0) 255 : cluster [DBG] mgrmap e16: vm05.wnsmpp(active, since 16s)
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.520166+0000 mon.vm05 (mon.0) 256 : cluster [INF] overall HEALTH_OK
2026-03-10T07:17:59.877 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: cluster 2026-03-10T07:17:59.520363+0000 mon.vm05 (mon.0) 257 : cluster [DBG] Standby manager daemon vm09.rfdvwa started
2026-03-10T07:17:59.878 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:59.521749+0000 mon.vm05 (mon.0) 258 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:17:59.878 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:59.522545+0000 mon.vm05 (mon.0) 259 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.rfdvwa/key"}]: dispatch
2026-03-10T07:17:59.878 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:59.522824+0000 mon.vm05 (mon.0) 260 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:17:59.878 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:17:59 vm09 bash[21099]: audit 2026-03-10T07:17:59.526948+0000 mon.vm05 (mon.0) 261 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:17:59.940 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T07:17:59.940 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph config generate-minimal-conf
2026-03-10T07:17:59.948 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:54.510932+0000 mon.vm05 (mon.0) 234 : cluster [INF] mon.vm05 calling monitor election
2026-03-10T07:17:59.948 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:54.511897+0000 mon.vm05 (mon.0) 235 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:17:59.948 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:54.511936+0000 mon.vm05 (mon.0) 236 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.948 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:55.499540+0000 mon.vm05 (mon.0) 237 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.948 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:56.497973+0000 mon.vm09 (mon.1) 1 : cluster [INF] mon.vm09 calling monitor election
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:56.499603+0000 mon.vm05 (mon.0) 238 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:56.911680+0000 mon.vm05 (mon.0) 239 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.rfdvwa/crt"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:57.499766+0000 mon.vm05 (mon.0) 240 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:57.672558+0000 mon.vm05 (mon.0) 241 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:58.499912+0000 mon.vm05 (mon.0) 242 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:59.500503+0000 mon.vm05 (mon.0) 243 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.515553+0000 mon.vm05 (mon.0) 244 : cluster [INF] mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1)
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519438+0000 mon.vm05 (mon.0) 245 : cluster [DBG] monmap epoch 2
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519459+0000 mon.vm05 (mon.0) 246 : cluster [DBG] fsid f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519471+0000 mon.vm05 (mon.0) 247 : cluster [DBG] last_changed 2026-03-10T07:17:54.507938+0000
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519480+0000 mon.vm05 (mon.0) 248 : cluster [DBG] created 2026-03-10T07:16:38.694276+0000
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519489+0000 mon.vm05 (mon.0) 249 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519498+0000 mon.vm05 (mon.0) 250 : cluster [DBG] election_strategy: 1
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519508+0000 mon.vm05 (mon.0) 251 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519516+0000 mon.vm05 (mon.0) 252 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519908+0000 mon.vm05 (mon.0) 253 : cluster [DBG] fsmap
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.519934+0000 mon.vm05 (mon.0) 254 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.520074+0000 mon.vm05 (mon.0) 255 : cluster [DBG] mgrmap e16: vm05.wnsmpp(active, since 16s)
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.520166+0000 mon.vm05 (mon.0) 256 : cluster [INF] overall HEALTH_OK
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: cluster 2026-03-10T07:17:59.520363+0000 mon.vm05 (mon.0) 257 : cluster [DBG] Standby manager daemon vm09.rfdvwa started
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:59.521749+0000 mon.vm05 (mon.0) 258 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:59.522545+0000 mon.vm05 (mon.0) 259 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.rfdvwa/key"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:59.522824+0000 mon.vm05 (mon.0) 260 : audit [DBG] from='mgr.? 192.168.123.109:0/2735881199' entity='mgr.vm09.rfdvwa' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T07:17:59.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:17:59 vm05 bash[17520]: audit 2026-03-10T07:17:59.526948+0000 mon.vm05 (mon.0) 261 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.670 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: cluster 2026-03-10T07:17:59.587765+0000 mon.vm05 (mon.0) 262 : cluster [DBG] mgrmap e17: vm05.wnsmpp(active, since 16s), standbys: vm09.rfdvwa
2026-03-10T07:18:00.670 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:17:59.587836+0000 mon.vm05 (mon.0) 263 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr metadata", "who": "vm09.rfdvwa", "id": "vm09.rfdvwa"}]: dispatch
2026-03-10T07:18:00.670 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:17:59.865036+0000 mon.vm05 (mon.0) 264 : audit [DBG] from='client.? 192.168.123.109:0/3012372970' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:18:00.670 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:17:59.891731+0000 mon.vm05 (mon.0) 265 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.670 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:17:59.897895+0000 mon.vm05 (mon.0) 266 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.670 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:17:59.898690+0000 mon.vm05 (mon.0) 267 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:17:59.899196+0000 mon.vm05 (mon.0) 268 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: cephadm 2026-03-10T07:17:59.899807+0000 mgr.vm05.wnsmpp (mgr.14195) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: cephadm 2026-03-10T07:17:59.899915+0000 mgr.vm05.wnsmpp (mgr.14195) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: cephadm 2026-03-10T07:17:59.949264+0000 mgr.vm05.wnsmpp (mgr.14195) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: cephadm 2026-03-10T07:17:59.956218+0000 mgr.vm05.wnsmpp (mgr.14195) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.020766+0000 mon.vm05 (mon.0) 269 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.024655+0000 mon.vm05 (mon.0) 270 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.029094+0000 mon.vm05 (mon.0) 271 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.037296+0000 mon.vm05 (mon.0) 272 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.045643+0000 mon.vm05 (mon.0) 273 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: cephadm 2026-03-10T07:18:00.060264+0000 mgr.vm05.wnsmpp (mgr.14195) 23 : cephadm [INF] Reconfiguring crash.vm05 (monmap changed)...
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.060591+0000 mon.vm05 (mon.0) 274 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.061281+0000 mon.vm05 (mon.0) 275 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: cephadm 2026-03-10T07:18:00.062475+0000 mgr.vm05.wnsmpp (mgr.14195) 24 : cephadm [INF] Reconfiguring daemon crash.vm05 on vm05
2026-03-10T07:18:00.671 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:00 vm05 bash[17520]: audit 2026-03-10T07:18:00.500328+0000 mon.vm05 (mon.0) 276 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:18:00.765 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: cluster 2026-03-10T07:17:59.587765+0000 mon.vm05 (mon.0) 262 : cluster [DBG] mgrmap e17: vm05.wnsmpp(active, since 16s), standbys: vm09.rfdvwa
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:17:59.587836+0000 mon.vm05 (mon.0) 263 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr metadata", "who": "vm09.rfdvwa", "id": "vm09.rfdvwa"}]: dispatch
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:17:59.865036+0000 mon.vm05 (mon.0) 264 : audit [DBG] from='client.? 192.168.123.109:0/3012372970' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:17:59.891731+0000 mon.vm05 (mon.0) 265 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:17:59.897895+0000 mon.vm05 (mon.0) 266 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:17:59.898690+0000 mon.vm05 (mon.0) 267 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:17:59.899196+0000 mon.vm05 (mon.0) 268 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: cephadm 2026-03-10T07:17:59.899807+0000 mgr.vm05.wnsmpp (mgr.14195) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: cephadm 2026-03-10T07:17:59.899915+0000 mgr.vm05.wnsmpp (mgr.14195) 20 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: cephadm 2026-03-10T07:17:59.949264+0000 mgr.vm05.wnsmpp (mgr.14195) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: cephadm 2026-03-10T07:17:59.956218+0000 mgr.vm05.wnsmpp (mgr.14195) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/config/ceph.conf
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.020766+0000 mon.vm05 (mon.0) 269 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.024655+0000 mon.vm05 (mon.0) 270 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.029094+0000 mon.vm05 (mon.0) 271 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.037296+0000 mon.vm05 (mon.0) 272 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.045643+0000 mon.vm05 (mon.0) 273 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:00.766 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: cephadm 2026-03-10T07:18:00.060264+0000 mgr.vm05.wnsmpp (mgr.14195) 23 : cephadm [INF] Reconfiguring crash.vm05 (monmap changed)...
2026-03-10T07:18:00.767 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.060591+0000 mon.vm05 (mon.0) 274 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T07:18:00.767 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.061281+0000 mon.vm05 (mon.0) 275 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:00.767 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: cephadm 2026-03-10T07:18:00.062475+0000 mgr.vm05.wnsmpp (mgr.14195) 24 : cephadm [INF] Reconfiguring daemon crash.vm05 on vm05
2026-03-10T07:18:00.767 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:00 vm09 bash[21099]: audit 2026-03-10T07:18:00.500328+0000 mon.vm05 (mon.0) 276 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:00.905565+0000 mon.vm05 (mon.0) 277 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:00.911971+0000 mon.vm05 (mon.0) 278 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: cephadm 2026-03-10T07:18:00.913118+0000 mgr.vm05.wnsmpp (mgr.14195) 25 : cephadm [INF] Reconfiguring ceph-exporter.vm05 (monmap changed)...
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:00.913385+0000 mon.vm05 (mon.0) 279 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:00.914037+0000 mon.vm05 (mon.0) 280 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: cephadm 2026-03-10T07:18:00.914653+0000 mgr.vm05.wnsmpp (mgr.14195) 26 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm05 on vm05
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:01.326857+0000 mon.vm05 (mon.0) 281 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:01.330963+0000 mon.vm05 (mon.0) 282 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: cephadm 2026-03-10T07:18:01.331676+0000 mgr.vm05.wnsmpp (mgr.14195) 27 : cephadm [INF] Reconfiguring mgr.vm05.wnsmpp (unknown last config time)...
2026-03-10T07:18:01.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:01.331889+0000 mon.vm05 (mon.0) 283 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.wnsmpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:18:01.909 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:01.332513+0000 mon.vm05 (mon.0) 284 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:18:01.909 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:01.332932+0000 mon.vm05 (mon.0) 285 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:01.909 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: cephadm 2026-03-10T07:18:01.333414+0000 mgr.vm05.wnsmpp (mgr.14195) 28 : cephadm [INF] Reconfiguring daemon mgr.vm05.wnsmpp on vm05
2026-03-10T07:18:01.909 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:01.724166+0000 mon.vm05 (mon.0) 286 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:01.909 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:01 vm05 bash[17520]: audit 2026-03-10T07:18:01.728904+0000 mon.vm05 (mon.0) 287 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:00.905565+0000 mon.vm05 (mon.0) 277 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:00.911971+0000 mon.vm05 (mon.0) 278 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: cephadm 2026-03-10T07:18:00.913118+0000 mgr.vm05.wnsmpp (mgr.14195) 25 : cephadm [INF] Reconfiguring ceph-exporter.vm05 (monmap changed)...
2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:00.913385+0000 mon.vm05 (mon.0) 279 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:00.913385+0000 mon.vm05 (mon.0) 279 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:00.914037+0000 mon.vm05 (mon.0) 280 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:00.914037+0000 mon.vm05 (mon.0) 280 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: cephadm 2026-03-10T07:18:00.914653+0000 mgr.vm05.wnsmpp (mgr.14195) 26 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: cephadm 2026-03-10T07:18:00.914653+0000 mgr.vm05.wnsmpp (mgr.14195) 26 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.326857+0000 mon.vm05 (mon.0) 281 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.326857+0000 mon.vm05 (mon.0) 281 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.330963+0000 mon.vm05 (mon.0) 282 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.330963+0000 mon.vm05 (mon.0) 282 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.331676+0000 mgr.vm05.wnsmpp (mgr.14195) 27 : cephadm [INF] Reconfiguring mgr.vm05.wnsmpp (unknown last config time)... 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.331676+0000 mgr.vm05.wnsmpp (mgr.14195) 27 : cephadm [INF] Reconfiguring mgr.vm05.wnsmpp (unknown last config time)... 
2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.331889+0000 mon.vm05 (mon.0) 283 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.wnsmpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.331889+0000 mon.vm05 (mon.0) 283 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.wnsmpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.332513+0000 mon.vm05 (mon.0) 284 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.332513+0000 mon.vm05 (mon.0) 284 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.332932+0000 mon.vm05 (mon.0) 285 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.332932+0000 mon.vm05 (mon.0) 285 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:02.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.333414+0000 mgr.vm05.wnsmpp (mgr.14195) 28 : cephadm [INF] Reconfiguring daemon mgr.vm05.wnsmpp on vm05 2026-03-10T07:18:02.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.333414+0000 mgr.vm05.wnsmpp (mgr.14195) 28 : cephadm [INF] Reconfiguring daemon mgr.vm05.wnsmpp on vm05 2026-03-10T07:18:02.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.724166+0000 mon.vm05 (mon.0) 286 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:02.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.724166+0000 mon.vm05 (mon.0) 286 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:02.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.728904+0000 mon.vm05 (mon.0) 287 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:02.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:01 vm09 bash[21099]: audit 2026-03-10T07:18:01.728904+0000 mon.vm05 (mon.0) 287 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.729653+0000 mgr.vm05.wnsmpp 
(mgr.14195) 29 : cephadm [INF] Reconfiguring alertmanager.vm05 (dependencies changed)... 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.729653+0000 mgr.vm05.wnsmpp (mgr.14195) 29 : cephadm [INF] Reconfiguring alertmanager.vm05 (dependencies changed)... 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.735517+0000 mgr.vm05.wnsmpp (mgr.14195) 30 : cephadm [INF] Reconfiguring daemon alertmanager.vm05 on vm05 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: cephadm 2026-03-10T07:18:01.735517+0000 mgr.vm05.wnsmpp (mgr.14195) 30 : cephadm [INF] Reconfiguring daemon alertmanager.vm05 on vm05 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: audit 2026-03-10T07:18:02.465510+0000 mon.vm05 (mon.0) 288 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: audit 2026-03-10T07:18:02.465510+0000 mon.vm05 (mon.0) 288 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: audit 2026-03-10T07:18:02.470805+0000 mon.vm05 (mon.0) 289 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:02 vm09 bash[21099]: audit 2026-03-10T07:18:02.470805+0000 mon.vm05 (mon.0) 289 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: cephadm 2026-03-10T07:18:01.729653+0000 mgr.vm05.wnsmpp (mgr.14195) 29 : cephadm [INF] Reconfiguring alertmanager.vm05 (dependencies changed)... 2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: cephadm 2026-03-10T07:18:01.729653+0000 mgr.vm05.wnsmpp (mgr.14195) 29 : cephadm [INF] Reconfiguring alertmanager.vm05 (dependencies changed)... 
2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: cephadm 2026-03-10T07:18:01.735517+0000 mgr.vm05.wnsmpp (mgr.14195) 30 : cephadm [INF] Reconfiguring daemon alertmanager.vm05 on vm05 2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: cephadm 2026-03-10T07:18:01.735517+0000 mgr.vm05.wnsmpp (mgr.14195) 30 : cephadm [INF] Reconfiguring daemon alertmanager.vm05 on vm05 2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: audit 2026-03-10T07:18:02.465510+0000 mon.vm05 (mon.0) 288 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: audit 2026-03-10T07:18:02.465510+0000 mon.vm05 (mon.0) 288 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: audit 2026-03-10T07:18:02.470805+0000 mon.vm05 (mon.0) 289 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:03.175 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:02 vm05 bash[17520]: audit 2026-03-10T07:18:02.470805+0000 mon.vm05 (mon.0) 289 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: cephadm 2026-03-10T07:18:02.471736+0000 mgr.vm05.wnsmpp (mgr.14195) 31 : cephadm [INF] Reconfiguring prometheus.vm05 (dependencies changed)... 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: cephadm 2026-03-10T07:18:02.471736+0000 mgr.vm05.wnsmpp (mgr.14195) 31 : cephadm [INF] Reconfiguring prometheus.vm05 (dependencies changed)... 
2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: cluster 2026-03-10T07:18:02.635096+0000 mgr.vm05.wnsmpp (mgr.14195) 32 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: cluster 2026-03-10T07:18:02.635096+0000 mgr.vm05.wnsmpp (mgr.14195) 32 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: cephadm 2026-03-10T07:18:02.664999+0000 mgr.vm05.wnsmpp (mgr.14195) 33 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: cephadm 2026-03-10T07:18:02.664999+0000 mgr.vm05.wnsmpp (mgr.14195) 33 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.444567+0000 mon.vm05 (mon.0) 290 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.444567+0000 mon.vm05 (mon.0) 290 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.458778+0000 mon.vm05 (mon.0) 291 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.458778+0000 mon.vm05 (mon.0) 291 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.461002+0000 mon.vm05 (mon.0) 292 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.461002+0000 mon.vm05 (mon.0) 292 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.461818+0000 mon.vm05 (mon.0) 293 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.461818+0000 mon.vm05 (mon.0) 293 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.462249+0000 mon.vm05 (mon.0) 294 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 
bash[21099]: audit 2026-03-10T07:18:03.462249+0000 mon.vm05 (mon.0) 294 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.897618+0000 mon.vm05 (mon.0) 295 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.897618+0000 mon.vm05 (mon.0) 295 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.902397+0000 mon.vm05 (mon.0) 296 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:03 vm09 bash[21099]: audit 2026-03-10T07:18:03.902397+0000 mon.vm05 (mon.0) 296 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: cephadm 2026-03-10T07:18:02.471736+0000 mgr.vm05.wnsmpp (mgr.14195) 31 : cephadm [INF] Reconfiguring prometheus.vm05 (dependencies changed)... 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: cephadm 2026-03-10T07:18:02.471736+0000 mgr.vm05.wnsmpp (mgr.14195) 31 : cephadm [INF] Reconfiguring prometheus.vm05 (dependencies changed)... 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: cluster 2026-03-10T07:18:02.635096+0000 mgr.vm05.wnsmpp (mgr.14195) 32 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: cluster 2026-03-10T07:18:02.635096+0000 mgr.vm05.wnsmpp (mgr.14195) 32 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: cephadm 2026-03-10T07:18:02.664999+0000 mgr.vm05.wnsmpp (mgr.14195) 33 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: cephadm 2026-03-10T07:18:02.664999+0000 mgr.vm05.wnsmpp (mgr.14195) 33 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.444567+0000 mon.vm05 (mon.0) 290 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.444567+0000 mon.vm05 (mon.0) 290 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.458778+0000 mon.vm05 (mon.0) 291 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.458778+0000 mon.vm05 (mon.0) 291 : audit [INF] from='mgr.14195 
192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.461002+0000 mon.vm05 (mon.0) 292 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.461002+0000 mon.vm05 (mon.0) 292 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.461818+0000 mon.vm05 (mon.0) 293 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.461818+0000 mon.vm05 (mon.0) 293 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.462249+0000 mon.vm05 (mon.0) 294 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.462249+0000 mon.vm05 (mon.0) 294 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.897618+0000 mon.vm05 (mon.0) 295 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.897618+0000 mon.vm05 (mon.0) 295 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.902397+0000 mon.vm05 (mon.0) 296 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:03 vm05 bash[17520]: audit 2026-03-10T07:18:03.902397+0000 mon.vm05 (mon.0) 296 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.460619+0000 mgr.vm05.wnsmpp (mgr.14195) 34 : cephadm [INF] Reconfiguring mon.vm05 (unknown last config time)... 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.460619+0000 mgr.vm05.wnsmpp (mgr.14195) 34 : cephadm [INF] Reconfiguring mon.vm05 (unknown last config time)... 
2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.462938+0000 mgr.vm05.wnsmpp (mgr.14195) 35 : cephadm [INF] Reconfiguring daemon mon.vm05 on vm05 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.462938+0000 mgr.vm05.wnsmpp (mgr.14195) 35 : cephadm [INF] Reconfiguring daemon mon.vm05 on vm05 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.903385+0000 mgr.vm05.wnsmpp (mgr.14195) 36 : cephadm [INF] Reconfiguring grafana.vm05 (dependencies changed)... 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.903385+0000 mgr.vm05.wnsmpp (mgr.14195) 36 : cephadm [INF] Reconfiguring grafana.vm05 (dependencies changed)... 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.940126+0000 mgr.vm05.wnsmpp (mgr.14195) 37 : cephadm [INF] Reconfiguring daemon grafana.vm05 on vm05 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: cephadm 2026-03-10T07:18:03.940126+0000 mgr.vm05.wnsmpp (mgr.14195) 37 : cephadm [INF] Reconfiguring daemon grafana.vm05 on vm05 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.621994+0000 mon.vm05 (mon.0) 297 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.621994+0000 mon.vm05 (mon.0) 297 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.627955+0000 mon.vm05 (mon.0) 298 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.627955+0000 mon.vm05 (mon.0) 298 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.631383+0000 mon.vm05 (mon.0) 299 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.631383+0000 mon.vm05 (mon.0) 299 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T07:18:05.064 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.636909+0000 mon.vm05 (mon.0) 300 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:05.064 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:04 vm05 bash[17520]: audit 2026-03-10T07:18:04.636909+0000 mon.vm05 (mon.0) 300 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.460619+0000 mgr.vm05.wnsmpp (mgr.14195) 34 : cephadm [INF] Reconfiguring mon.vm05 (unknown last config time)... 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.460619+0000 mgr.vm05.wnsmpp (mgr.14195) 34 : cephadm [INF] Reconfiguring mon.vm05 (unknown last config time)... 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.462938+0000 mgr.vm05.wnsmpp (mgr.14195) 35 : cephadm [INF] Reconfiguring daemon mon.vm05 on vm05 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.462938+0000 mgr.vm05.wnsmpp (mgr.14195) 35 : cephadm [INF] Reconfiguring daemon mon.vm05 on vm05 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.903385+0000 mgr.vm05.wnsmpp (mgr.14195) 36 : cephadm [INF] Reconfiguring grafana.vm05 (dependencies changed)... 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.903385+0000 mgr.vm05.wnsmpp (mgr.14195) 36 : cephadm [INF] Reconfiguring grafana.vm05 (dependencies changed)... 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.940126+0000 mgr.vm05.wnsmpp (mgr.14195) 37 : cephadm [INF] Reconfiguring daemon grafana.vm05 on vm05 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: cephadm 2026-03-10T07:18:03.940126+0000 mgr.vm05.wnsmpp (mgr.14195) 37 : cephadm [INF] Reconfiguring daemon grafana.vm05 on vm05 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.621994+0000 mon.vm05 (mon.0) 297 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.621994+0000 mon.vm05 (mon.0) 297 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.627955+0000 mon.vm05 (mon.0) 298 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.627955+0000 mon.vm05 (mon.0) 298 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.631383+0000 mon.vm05 (mon.0) 299 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 
2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.631383+0000 mon.vm05 (mon.0) 299 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.636909+0000 mon.vm05 (mon.0) 300 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:05.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:04 vm09 bash[21099]: audit 2026-03-10T07:18:04.636909+0000 mon.vm05 (mon.0) 300 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 2026-03-10T07:18:04.628837+0000 mgr.vm05.wnsmpp (mgr.14195) 38 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 2026-03-10T07:18:04.628837+0000 mgr.vm05.wnsmpp (mgr.14195) 38 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cluster 2026-03-10T07:18:04.635429+0000 mgr.vm05.wnsmpp (mgr.14195) 39 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cluster 2026-03-10T07:18:04.635429+0000 mgr.vm05.wnsmpp (mgr.14195) 39 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 2026-03-10T07:18:04.637629+0000 mgr.vm05.wnsmpp (mgr.14195) 40 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 2026-03-10T07:18:04.637629+0000 mgr.vm05.wnsmpp (mgr.14195) 40 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.041665+0000 mon.vm05 (mon.0) 301 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.041665+0000 mon.vm05 (mon.0) 301 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.046513+0000 mon.vm05 (mon.0) 302 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.046513+0000 mon.vm05 (mon.0) 302 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 
2026-03-10T07:18:05.047048+0000 mgr.vm05.wnsmpp (mgr.14195) 41 : cephadm [INF] Reconfiguring mon.vm09 (monmap changed)... 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 2026-03-10T07:18:05.047048+0000 mgr.vm05.wnsmpp (mgr.14195) 41 : cephadm [INF] Reconfiguring mon.vm09 (monmap changed)... 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.047639+0000 mon.vm05 (mon.0) 303 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.047639+0000 mon.vm05 (mon.0) 303 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.048143+0000 mon.vm05 (mon.0) 304 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.048143+0000 mon.vm05 (mon.0) 304 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.048976+0000 mon.vm05 (mon.0) 305 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.048976+0000 mon.vm05 (mon.0) 305 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 2026-03-10T07:18:05.049657+0000 mgr.vm05.wnsmpp (mgr.14195) 42 : cephadm [INF] Reconfiguring daemon mon.vm09 on vm09 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: cephadm 2026-03-10T07:18:05.049657+0000 mgr.vm05.wnsmpp (mgr.14195) 42 : cephadm [INF] Reconfiguring daemon mon.vm09 on vm09 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.455989+0000 mon.vm05 (mon.0) 306 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.455989+0000 mon.vm05 (mon.0) 306 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.461043+0000 mon.vm05 (mon.0) 307 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.461043+0000 mon.vm05 (mon.0) 307 : audit [INF] 
from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.462261+0000 mon.vm05 (mon.0) 308 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.462261+0000 mon.vm05 (mon.0) 308 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.462852+0000 mon.vm05 (mon.0) 309 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:06.373 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:06 vm09 bash[21099]: audit 2026-03-10T07:18:05.462852+0000 mon.vm05 (mon.0) 309 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:06.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: cephadm 2026-03-10T07:18:04.628837+0000 mgr.vm05.wnsmpp (mgr.14195) 38 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 2026-03-10T07:18:06.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: cephadm 2026-03-10T07:18:04.628837+0000 mgr.vm05.wnsmpp (mgr.14195) 38 : cephadm [INF] Reconfiguring ceph-exporter.vm09 (monmap changed)... 
2026-03-10T07:18:06.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: cluster 2026-03-10T07:18:04.635429+0000 mgr.vm05.wnsmpp (mgr.14195) 39 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:06.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: cephadm 2026-03-10T07:18:04.637629+0000 mgr.vm05.wnsmpp (mgr.14195) 40 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm09 on vm09
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.041665+0000 mon.vm05 (mon.0) 301 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.046513+0000 mon.vm05 (mon.0) 302 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: cephadm 2026-03-10T07:18:05.047048+0000 mgr.vm05.wnsmpp (mgr.14195) 41 : cephadm [INF] Reconfiguring mon.vm09 (monmap changed)...
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.047639+0000 mon.vm05 (mon.0) 303 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.048143+0000 mon.vm05 (mon.0) 304 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.048976+0000 mon.vm05 (mon.0) 305 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: cephadm 2026-03-10T07:18:05.049657+0000 mgr.vm05.wnsmpp (mgr.14195) 42 : cephadm [INF] Reconfiguring daemon mon.vm09 on vm09
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.455989+0000 mon.vm05 (mon.0) 306 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.461043+0000 mon.vm05 (mon.0) 307 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.462261+0000 mon.vm05 (mon.0) 308 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T07:18:06.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:06 vm05 bash[17520]: audit 2026-03-10T07:18:05.462852+0000 mon.vm05 (mon.0) 309 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:06.629 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:18:06.901 INFO:teuthology.orchestra.run.vm05.stdout:# minimal ceph.conf for f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:18:06.901 INFO:teuthology.orchestra.run.vm05.stdout:[global]
2026-03-10T07:18:06.901 INFO:teuthology.orchestra.run.vm05.stdout: fsid = f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:18:06.901 INFO:teuthology.orchestra.run.vm05.stdout: mon_host = [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]
2026-03-10T07:18:06.957 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T07:18:06.957 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:18:06.957 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T07:18:07.008 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:18:07.008 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:18:07.055 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T07:18:07.055 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T07:18:07.062 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T07:18:07.062 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:18:07.112 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T07:18:07.112 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T07:18:07.112 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T07:18:07.116 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:18:07.116 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d?
2026-03-10T07:18:07.160 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda
2026-03-10T07:18:07.161 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb
2026-03-10T07:18:07.161 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc
2026-03-10T07:18:07.161 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd
2026-03-10T07:18:07.161 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde
2026-03-10T07:18:07.161 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T07:18:07.161 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T07:18:07.161 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout:  File: /dev/vdb
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout:  Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 07:12:30.757144854 +0000
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 07:12:29.717144854 +0000
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 07:12:29.717144854 +0000
2026-03-10T07:18:07.204 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T07:18:07.205 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T07:18:07.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: cephadm 2026-03-10T07:18:05.461909+0000 mgr.vm05.wnsmpp (mgr.14195) 43 : cephadm [INF] Reconfiguring crash.vm09 (monmap changed)...
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: cephadm 2026-03-10T07:18:05.463474+0000 mgr.vm05.wnsmpp (mgr.14195) 44 : cephadm [INF] Reconfiguring daemon crash.vm09 on vm09
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.059720+0000 mon.vm05 (mon.0) 310 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.073430+0000 mon.vm05 (mon.0) 311 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: cephadm 2026-03-10T07:18:06.074176+0000 mgr.vm05.wnsmpp (mgr.14195) 45 : cephadm [INF] Reconfiguring mgr.vm09.rfdvwa (monmap changed)...
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.074585+0000 mon.vm05 (mon.0) 312 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.rfdvwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.075101+0000 mon.vm05 (mon.0) 313 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.075481+0000 mon.vm05 (mon.0) 314 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: cephadm 2026-03-10T07:18:06.076042+0000 mgr.vm05.wnsmpp (mgr.14195) 46 : cephadm [INF] Reconfiguring daemon mgr.vm09.rfdvwa on vm09
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.490634+0000 mon.vm05 (mon.0) 315 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.495706+0000 mon.vm05 (mon.0) 316 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.499007+0000 mon.vm05 (mon.0) 317 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.500050+0000 mon.vm05 (mon.0) 318 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.504508+0000 mon.vm05 (mon.0) 319 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.514446+0000 mon.vm05 (mon.0) 320 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.515383+0000 mon.vm05 (mon.0) 321 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.519202+0000 mon.vm05 (mon.0) 322 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.525532+0000 mon.vm05 (mon.0) 323 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.527657+0000 mon.vm05 (mon.0) 324 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.531662+0000 mon.vm05 (mon.0) 325 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.566669+0000 mon.vm05 (mon.0) 326 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:18:07.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:07 vm05 bash[17520]: audit 2026-03-10T07:18:06.902820+0000 mon.vm05 (mon.0) 327 : audit [DBG] from='client.? 192.168.123.105:0/3063592326' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:07.260 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T07:18:07.260 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T07:18:07.260 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000110707 s, 4.6 MB/s
2026-03-10T07:18:07.260 DEBUG:teuthology.orchestra.run.vm05:> !
2026-03-10T07:18:07.305 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdc
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 07:12:30.773144854 +0000
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 07:12:29.709144854 +0000
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 07:12:29.709144854 +0000
2026-03-10T07:18:07.352 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T07:18:07.352 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T07:18:07.400 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T07:18:07.400 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T07:18:07.400 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.00011774 s, 4.3 MB/s
2026-03-10T07:18:07.400 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: cephadm 2026-03-10T07:18:05.461909+0000 mgr.vm05.wnsmpp (mgr.14195) 43 : cephadm [INF] Reconfiguring crash.vm09 (monmap changed)...
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: cephadm 2026-03-10T07:18:05.463474+0000 mgr.vm05.wnsmpp (mgr.14195) 44 : cephadm [INF] Reconfiguring daemon crash.vm09 on vm09
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.059720+0000 mon.vm05 (mon.0) 310 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.073430+0000 mon.vm05 (mon.0) 311 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: cephadm 2026-03-10T07:18:06.074176+0000 mgr.vm05.wnsmpp (mgr.14195) 45 : cephadm [INF] Reconfiguring mgr.vm09.rfdvwa (monmap changed)...
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.074585+0000 mon.vm05 (mon.0) 312 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.rfdvwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.075101+0000 mon.vm05 (mon.0) 313 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.075481+0000 mon.vm05 (mon.0) 314 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: cephadm 2026-03-10T07:18:06.076042+0000 mgr.vm05.wnsmpp (mgr.14195) 46 : cephadm [INF] Reconfiguring daemon mgr.vm09.rfdvwa on vm09
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.490634+0000 mon.vm05 (mon.0) 315 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.495706+0000 mon.vm05 (mon.0) 316 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.499007+0000 mon.vm05 (mon.0) 317 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.500050+0000 mon.vm05 (mon.0) 318 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T07:18:07.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.504508+0000 mon.vm05 (mon.0) 319 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.514446+0000 mon.vm05 (mon.0) 320 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.515383+0000 mon.vm05 (mon.0) 321 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.519202+0000 mon.vm05 (mon.0) 322 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.525532+0000 mon.vm05 (mon.0) 323 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.527657+0000 mon.vm05 (mon.0) 324 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.531662+0000 mon.vm05 (mon.0) 325 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.566669+0000 mon.vm05 (mon.0) 326 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:18:07.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:07 vm09 bash[21099]: audit 2026-03-10T07:18:06.902820+0000 mon.vm05 (mon.0) 327 : audit [DBG] from='client.? 192.168.123.105:0/3063592326' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:07.447 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd
2026-03-10T07:18:07.492 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdd
2026-03-10T07:18:07.493 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:07.493 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T07:18:07.493 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:07.493 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 07:12:30.753144854 +0000
2026-03-10T07:18:07.493 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 07:12:29.713144854 +0000
2026-03-10T07:18:07.493 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 07:12:29.713144854 +0000
2026-03-10T07:18:07.493 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T07:18:07.493 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T07:18:07.539 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T07:18:07.539 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T07:18:07.539 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000132819 s, 3.9 MB/s
2026-03-10T07:18:07.540 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T07:18:07.586 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vde
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 07:12:30.773144854 +0000
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 07:12:29.729144854 +0000
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 07:12:29.729144854 +0000
2026-03-10T07:18:07.632 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T07:18:07.632 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T07:18:07.679 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T07:18:07.679 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T07:18:07.679 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000145832 s, 3.5 MB/s
2026-03-10T07:18:07.680 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T07:18:07.725 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T07:18:07.725 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T07:18:07.728 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:18:07.728 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d?
2026-03-10T07:18:07.772 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda
2026-03-10T07:18:07.772 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb
2026-03-10T07:18:07.772 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc
2026-03-10T07:18:07.772 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd
2026-03-10T07:18:07.772 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde
2026-03-10T07:18:07.772 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T07:18:07.772 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T07:18:07.772 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb
2026-03-10T07:18:07.815 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb
2026-03-10T07:18:07.815 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:07.815 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T07:18:07.815 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:07.815 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 07:12:00.022735290 +0000
2026-03-10T07:18:07.816 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 07:11:59.014735290 +0000
2026-03-10T07:18:07.816 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 07:11:59.014735290 +0000
2026-03-10T07:18:07.816 INFO:teuthology.orchestra.run.vm09.stdout: Birth: -
2026-03-10T07:18:07.816 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T07:18:07.863 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T07:18:07.863 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T07:18:07.863 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000112861 s, 4.5 MB/s
2026-03-10T07:18:07.863 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T07:18:07.909 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 07:12:00.030735290 +0000
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 07:11:59.010735290 +0000
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 07:11:59.010735290 +0000
2026-03-10T07:18:07.952 INFO:teuthology.orchestra.run.vm09.stdout: Birth: -
2026-03-10T07:18:07.952 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T07:18:07.999 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T07:18:07.999 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T07:18:07.999 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000106499 s, 4.8 MB/s
2026-03-10T07:18:07.999 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T07:18:08.044 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 07:12:00.022735290 +0000
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 07:11:59.002735290 +0000
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 07:11:59.002735290 +0000
2026-03-10T07:18:08.092 INFO:teuthology.orchestra.run.vm09.stdout: Birth: -
2026-03-10T07:18:08.092 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T07:18:08.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:08 vm09 bash[21099]: audit 2026-03-10T07:18:06.499349+0000 mgr.vm05.wnsmpp (mgr.14195) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T07:18:08.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:08 vm09 bash[21099]: audit 2026-03-10T07:18:06.500399+0000 mgr.vm05.wnsmpp (mgr.14195) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T07:18:08.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:08 vm09 bash[21099]: audit 2026-03-10T07:18:06.514756+0000 mgr.vm05.wnsmpp (mgr.14195) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T07:18:08.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:08 vm09 bash[21099]: audit 2026-03-10T07:18:06.515618+0000 mgr.vm05.wnsmpp (mgr.14195) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch
2026-03-10T07:18:08.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:08 vm09 bash[21099]: audit 2026-03-10T07:18:06.525824+0000 mgr.vm05.wnsmpp (mgr.14195) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T07:18:08.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:08 vm09 bash[21099]: audit 2026-03-10T07:18:06.527929+0000 mgr.vm05.wnsmpp (mgr.14195) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch
2026-03-10T07:18:08.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:08 vm09 bash[21099]: cluster 2026-03-10T07:18:06.635793+0000 mgr.vm05.wnsmpp (mgr.14195) 53 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:08.140 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T07:18:08.140 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T07:18:08.140 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000120455 s, 4.3 MB/s
2026-03-10T07:18:08.141 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T07:18:08.184 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 07:12:00.030735290 +0000
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 07:11:59.002735290 +0000
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 07:11:59.002735290 +0000
2026-03-10T07:18:08.228 INFO:teuthology.orchestra.run.vm09.stdout: Birth: -
2026-03-10T07:18:08.228 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T07:18:08.275 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T07:18:08.275 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T07:18:08.275 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000128401 s, 4.0 MB/s
2026-03-10T07:18:08.275 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T07:18:08.321 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch apply osd --all-available-devices
2026-03-10T07:18:08.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:08 vm05 bash[17520]: audit 2026-03-10T07:18:06.499349+0000 mgr.vm05.wnsmpp (mgr.14195) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T07:18:08.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:08 vm05 bash[17520]: audit 2026-03-10T07:18:06.500399+0000 mgr.vm05.wnsmpp (mgr.14195) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T07:18:08.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:08 vm05 bash[17520]: audit 2026-03-10T07:18:06.514756+0000 mgr.vm05.wnsmpp (mgr.14195) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T07:18:08.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:08 vm05 bash[17520]: audit 2026-03-10T07:18:06.515618+0000 mgr.vm05.wnsmpp (mgr.14195) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch
2026-03-10T07:18:08.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:08 vm05 bash[17520]: audit 2026-03-10T07:18:06.525824+0000 mgr.vm05.wnsmpp (mgr.14195) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T07:18:08.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:08 vm05 bash[17520]: audit 2026-03-10T07:18:06.527929+0000 mgr.vm05.wnsmpp (mgr.14195) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch
2026-03-10T07:18:08.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:08 vm05 bash[17520]: cluster 2026-03-10T07:18:06.635793+0000 mgr.vm05.wnsmpp (mgr.14195) 53 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:10.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:10 vm09 bash[21099]: cluster 2026-03-10T07:18:08.635988+0000 mgr.vm05.wnsmpp (mgr.14195) 54 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:10.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:10 vm05 bash[17520]: cluster 2026-03-10T07:18:08.635988+0000 mgr.vm05.wnsmpp (mgr.14195) 54 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:12.378 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm09/config
2026-03-10T07:18:12.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: cluster 2026-03-10T07:18:10.636198+0000 mgr.vm05.wnsmpp (mgr.14195) 55 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:12.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: audit 2026-03-10T07:18:11.200459+0000 mon.vm05 (mon.0) 328 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: audit 2026-03-10T07:18:11.205245+0000 mon.vm05 (mon.0) 329 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: audit 2026-03-10T07:18:11.608442+0000 mon.vm05 (mon.0) 330 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: audit 2026-03-10T07:18:11.613997+0000 mon.vm05 (mon.0) 331 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: audit 2026-03-10T07:18:11.614887+0000 mon.vm05 (mon.0) 332 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:12.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: audit 2026-03-10T07:18:11.615418+0000 mon.vm05 (mon.0) 333 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:18:12.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:12 vm05 bash[17520]: audit 2026-03-10T07:18:11.619343+0000 mon.vm05 (mon.0) 334 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.464 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: cluster 2026-03-10T07:18:10.636198+0000 mgr.vm05.wnsmpp (mgr.14195) 55 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:12.464 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: audit 2026-03-10T07:18:11.200459+0000 mon.vm05 (mon.0) 328 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.465 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: audit 2026-03-10T07:18:11.205245+0000 mon.vm05 (mon.0) 329 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.465 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: audit 2026-03-10T07:18:11.608442+0000 mon.vm05 (mon.0) 330 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.465 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: audit 2026-03-10T07:18:11.613997+0000 mon.vm05 (mon.0) 331 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.465 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: audit 2026-03-10T07:18:11.614887+0000 mon.vm05 (mon.0) 332 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:12.465 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: audit 2026-03-10T07:18:11.615418+0000 mon.vm05 (mon.0) 333 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:18:12.465 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:12 vm09 bash[21099]: audit 2026-03-10T07:18:11.619343+0000 mon.vm05 (mon.0) 334 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:12.644 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled osd.all-available-devices update...
2026-03-10T07:18:12.736 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
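The `ceph orch apply osd --all-available-devices` call above (confirmed by the mgr logging "Saving service osd.all-available-devices spec with placement *" and the CLI's "Scheduled osd.all-available-devices update...") is shorthand for an OSD service spec that targets every host and consumes every available device. A sketch of applying roughly the equivalent spec by hand with `ceph orch apply -i <file>`; the spec text is an assumption based on what the mgr logs here, and the exact spec cephadm synthesizes may differ:

    import subprocess
    import tempfile

    # Roughly the drivegroup spec that the shorthand flag stores:
    # service osd.all-available-devices, placement '*', all data devices.
    SPEC = """\
    service_type: osd
    service_id: all-available-devices
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
    """

    with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
        f.write(SPEC)
        path = f.name

    # Apply the service spec from the YAML file (run where `ceph` has
    # access to the cluster conf and admin keyring).
    subprocess.run(["ceph", "orch", "apply", "-i", path], check=True)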
2026-03-10T07:18:12.736 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: cluster 2026-03-10T07:18:12.636411+0000 mgr.vm05.wnsmpp (mgr.14195) 56 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: audit 2026-03-10T07:18:12.638690+0000 mgr.vm05.wnsmpp (mgr.14195) 57 : audit [DBG] from='client.24105 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: cephadm 2026-03-10T07:18:12.640128+0000 mgr.vm05.wnsmpp (mgr.14195) 58 : cephadm [INF] Marking host: vm05 for OSDSpec preview refresh.
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: cephadm 2026-03-10T07:18:12.640162+0000 mgr.vm05.wnsmpp (mgr.14195) 59 : cephadm [INF] Marking host: vm09 for OSDSpec preview refresh.
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: cephadm 2026-03-10T07:18:12.640383+0000 mgr.vm05.wnsmpp (mgr.14195) 60 : cephadm [INF] Saving service osd.all-available-devices spec with placement *
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: audit 2026-03-10T07:18:12.645041+0000 mon.vm05 (mon.0) 335 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: audit 2026-03-10T07:18:12.645807+0000 mon.vm05 (mon.0) 336 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:18:13.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:13 vm09 bash[21099]: audit 2026-03-10T07:18:12.672865+0000 mon.vm05 (mon.0) 337 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:18:13.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: cluster 2026-03-10T07:18:12.636411+0000 mgr.vm05.wnsmpp (mgr.14195) 56 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:13.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: audit 2026-03-10T07:18:12.638690+0000 mgr.vm05.wnsmpp (mgr.14195) 57 : audit [DBG] from='client.24105 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:18:13.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: cephadm 2026-03-10T07:18:12.640128+0000 mgr.vm05.wnsmpp (mgr.14195) 58 : cephadm [INF] Marking host: vm05 for OSDSpec preview refresh.
2026-03-10T07:18:13.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: cephadm 2026-03-10T07:18:12.640162+0000 mgr.vm05.wnsmpp (mgr.14195) 59 : cephadm [INF] Marking host: vm09 for OSDSpec preview refresh.
2026-03-10T07:18:13.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: cephadm 2026-03-10T07:18:12.640383+0000 mgr.vm05.wnsmpp (mgr.14195) 60 : cephadm [INF] Saving service osd.all-available-devices spec with placement *
2026-03-10T07:18:13.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: audit 2026-03-10T07:18:12.645041+0000 mon.vm05 (mon.0) 335 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:13.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: audit 2026-03-10T07:18:12.645807+0000 mon.vm05 (mon.0) 336 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:18:13.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:13 vm05 bash[17520]: audit 2026-03-10T07:18:12.672865+0000 mon.vm05 (mon.0) 337 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:18:16.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:15 vm09 bash[21099]: cluster 2026-03-10T07:18:14.636705+0000 mgr.vm05.wnsmpp (mgr.14195) 61 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:16.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:15 vm05 bash[17520]: cluster 2026-03-10T07:18:14.636705+0000 mgr.vm05.wnsmpp (mgr.14195) 61 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:17.369 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:18:17.662 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:18:17.733 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T07:18:17.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:17 vm05 bash[17520]: cluster 2026-03-10T07:18:16.636946+0000 mgr.vm05.wnsmpp (mgr.14195) 62 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:17.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:17 vm05 bash[17520]: audit 2026-03-10T07:18:17.662032+0000 mon.vm05 (mon.0) 338 : audit [DBG] from='client.? 192.168.123.105:0/1235409166' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:17.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:17 vm05 bash[17520]: audit 2026-03-10T07:18:17.698278+0000 mon.vm05 (mon.0) 339 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:17.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:17 vm05 bash[17520]: audit 2026-03-10T07:18:17.703884+0000 mon.vm05 (mon.0) 340 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:17.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:17 vm05 bash[17520]: audit 2026-03-10T07:18:17.711676+0000 mon.vm05 (mon.0) 341 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:18.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:17 vm09 bash[21099]: cluster 2026-03-10T07:18:16.636946+0000 mgr.vm05.wnsmpp (mgr.14195) 62 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:17 vm09 bash[21099]: audit 2026-03-10T07:18:17.662032+0000 mon.vm05 (mon.0) 338 : audit [DBG] from='client.? 192.168.123.105:0/1235409166' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:17 vm09 bash[21099]: audit 2026-03-10T07:18:17.698278+0000 mon.vm05 (mon.0) 339 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:17 vm09 bash[21099]: audit 2026-03-10T07:18:17.703884+0000 mon.vm05 (mon.0) 340 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:17 vm09 bash[21099]: audit 2026-03-10T07:18:17.711676+0000 mon.vm05 (mon.0) 341 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:18.734 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:17.729646+0000 mon.vm05 (mon.0) 342 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:17.999591+0000 mon.vm05 (mon.0) 343 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
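The DEBUG line above shows how the test checks progress: it shells into the cluster via cephadm and runs "ceph osd stat -f json", and the same command recurs every few seconds below, evidently a poll until the expected OSDs appear (the {"epoch":5,"num_osds":0,...} stdout earlier is one sample). A hypothetical reimplementation of such a poll, assuming admin credentials (wait_for_osds and its parameters are invented for illustration; the real teuthology helper may differ):

    import json
    import subprocess
    import time

    def osd_stat():
        # Mirrors the polled command: ceph osd stat -f json
        return json.loads(subprocess.check_output(["ceph", "osd", "stat", "-f", "json"]))

    def wait_for_osds(want, timeout=900, interval=5):
        # Hypothetical poll loop: block until `want` OSDs exist and are up.
        deadline = time.time() + timeout
        while time.time() < deadline:
            stat = osd_stat()
            if stat["num_osds"] >= want and stat["num_up_osds"] >= want:
                return stat
            time.sleep(interval)
        raise TimeoutError(f"OSDs did not reach {want} up within {timeout}s")
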
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.004369+0000 mon.vm05 (mon.0) 344 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.009458+0000 mon.vm05 (mon.0) 345 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.013867+0000 mon.vm05 (mon.0) 346 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.014784+0000 mon.vm05 (mon.0) 347 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.015521+0000 mon.vm05 (mon.0) 348 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.019477+0000 mon.vm05 (mon.0) 349 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.021114+0000 mon.vm05 (mon.0) 350 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.022994+0000 mon.vm05 (mon.0) 351 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.023669+0000 mon.vm05 (mon.0) 352 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.025322+0000 mon.vm05 (mon.0) 353 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:18:19.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:18 vm09 bash[21099]: audit 2026-03-10T07:18:18.026011+0000 mon.vm05 (mon.0) 354 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:19.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:17.729646+0000 mon.vm05 (mon.0) 342 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:17.999591+0000 mon.vm05 (mon.0) 343 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.004369+0000 mon.vm05 (mon.0) 344 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.009458+0000 mon.vm05 (mon.0) 345 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.013867+0000 mon.vm05 (mon.0) 346 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.014784+0000 mon.vm05 (mon.0) 347 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:19.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.015521+0000 mon.vm05 (mon.0) 348 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:18:19.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.019477+0000 mon.vm05 (mon.0) 349 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:19.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.021114+0000 mon.vm05 (mon.0) 350 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:18:19.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.022994+0000 mon.vm05 (mon.0) 351 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:18:19.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.023669+0000 mon.vm05 (mon.0) 352 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:19.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.025322+0000 mon.vm05 (mon.0) 353 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T07:18:19.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:18 vm05 bash[17520]: audit 2026-03-10T07:18:18.026011+0000 mon.vm05 (mon.0) 354 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:20.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:20 vm09 bash[21099]: cluster 2026-03-10T07:18:18.637142+0000 mgr.vm05.wnsmpp (mgr.14195) 63 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:20.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:20 vm05 bash[17520]: cluster 2026-03-10T07:18:18.637142+0000 mgr.vm05.wnsmpp (mgr.14195) 63 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
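The burst of "config generate-minimal-conf" and "auth get client.bootstrap-osd" dispatches above is cephadm staging, per prospective OSD, the two artifacts ceph-volume needs on each target host: a minimal ceph.conf naming the mons, and the bootstrap-osd credentials used to register new OSDs. The same two mon commands in isolation (both commands are real; the output paths below are illustrative only):

    import subprocess

    # Minimal client config (fsid + mon addresses) for the target host.
    minimal_conf = subprocess.check_output(["ceph", "config", "generate-minimal-conf"])
    # Credentials that the "osd new" calls further down authenticate with.
    bootstrap_key = subprocess.check_output(["ceph", "auth", "get", "client.bootstrap-osd"])

    with open("/tmp/ceph.conf.min", "wb") as f:          # illustrative path
        f.write(minimal_conf)
    with open("/tmp/bootstrap-osd.keyring", "wb") as f:  # illustrative path
        f.write(bootstrap_key)
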
2026-03-10T07:18:22.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:22 vm09 bash[21099]: cluster 2026-03-10T07:18:20.637379+0000 mgr.vm05.wnsmpp (mgr.14195) 64 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:22.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:22 vm05 bash[17520]: cluster 2026-03-10T07:18:20.637379+0000 mgr.vm05.wnsmpp (mgr.14195) 64 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:22.822 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:18:23.125 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:18:23.186 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T07:18:24.187 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json
2026-03-10T07:18:24.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:24 vm09 bash[21099]: cluster 2026-03-10T07:18:22.637553+0000 mgr.vm05.wnsmpp (mgr.14195) 65 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:24.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:24 vm09 bash[21099]: audit 2026-03-10T07:18:23.125923+0000 mon.vm05 (mon.0) 355 : audit [DBG] from='client.? 192.168.123.105:0/3382598562' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:24.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:24 vm09 bash[21099]: audit 2026-03-10T07:18:23.860504+0000 mon.vm09 (mon.1) 2 : audit [INF] from='client.? 192.168.123.109:0/321079014' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39"}]: dispatch
2026-03-10T07:18:24.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:24 vm09 bash[21099]: audit 2026-03-10T07:18:23.864955+0000 mon.vm05 (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39"}]: dispatch
2026-03-10T07:18:24.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:24 vm09 bash[21099]: audit 2026-03-10T07:18:23.867940+0000 mon.vm05 (mon.0) 357 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39"}]': finished
2026-03-10T07:18:24.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:24 vm09 bash[21099]: cluster 2026-03-10T07:18:23.869799+0000 mon.vm05 (mon.0) 358 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T07:18:24.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:24 vm09 bash[21099]: audit 2026-03-10T07:18:23.870105+0000 mon.vm05 (mon.0) 359 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:24.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:24 vm05 bash[17520]: cluster 2026-03-10T07:18:22.637553+0000 mgr.vm05.wnsmpp (mgr.14195) 65 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:24.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:24 vm05 bash[17520]: audit 2026-03-10T07:18:23.125923+0000 mon.vm05 (mon.0) 355 : audit [DBG] from='client.? 192.168.123.105:0/3382598562' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:24.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:24 vm05 bash[17520]: audit 2026-03-10T07:18:23.860504+0000 mon.vm09 (mon.1) 2 : audit [INF] from='client.? 192.168.123.109:0/321079014' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39"}]: dispatch
2026-03-10T07:18:24.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:24 vm05 bash[17520]: audit 2026-03-10T07:18:23.864955+0000 mon.vm05 (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39"}]: dispatch
2026-03-10T07:18:24.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:24 vm05 bash[17520]: audit 2026-03-10T07:18:23.867940+0000 mon.vm05 (mon.0) 357 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39"}]': finished
2026-03-10T07:18:24.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:24 vm05 bash[17520]: cluster 2026-03-10T07:18:23.869799+0000 mon.vm05 (mon.0) 358 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T07:18:24.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:24 vm05 bash[17520]: audit 2026-03-10T07:18:23.870105+0000 mon.vm05 (mon.0) 359 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:25.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:25 vm09 bash[21099]: audit 2026-03-10T07:18:23.985511+0000 mon.vm05 (mon.0) 360 : audit [INF] from='client.? 192.168.123.105:0/3554698269' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "165a1577-c628-4924-8467-6ee181e4ae8f"}]: dispatch
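The "osd new" sequence above is the first device being claimed: ceph-volume, authenticating as client.bootstrap-osd, asks the mons to allocate an OSD id for a fresh uuid; the request is dispatched via mon.vm09 (mon.1), handled by the leader mon.vm05 (mon.0), and the osdmap epoch bumps to e6 (1 total, 0 up, 1 in). The registration step by itself looks roughly like this ("ceph osd new" is the real command; the keyring path is the conventional location and is an assumption here):

    import subprocess
    import uuid

    osd_uuid = str(uuid.uuid4())  # a fresh uuid, as in the audited calls
    osd_id = subprocess.check_output(
        ["ceph", "--name", "client.bootstrap-osd",
         "--keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring",  # assumed path
         "osd", "new", osd_uuid]
    ).decode().strip()
    # Prints the newly allocated id; each successful call bumps the osdmap epoch.
    print(f"mon allocated osd.{osd_id} for uuid {osd_uuid}")
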
2026-03-10T07:18:25.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:25 vm09 bash[21099]: audit 2026-03-10T07:18:24.077075+0000 mon.vm05 (mon.0) 361 : audit [INF] from='client.? 192.168.123.105:0/3554698269' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "165a1577-c628-4924-8467-6ee181e4ae8f"}]': finished
2026-03-10T07:18:25.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:25 vm09 bash[21099]: cluster 2026-03-10T07:18:24.079458+0000 mon.vm05 (mon.0) 362 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in
2026-03-10T07:18:25.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:25 vm09 bash[21099]: audit 2026-03-10T07:18:24.080109+0000 mon.vm05 (mon.0) 363 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:25.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:25 vm09 bash[21099]: audit 2026-03-10T07:18:24.080267+0000 mon.vm05 (mon.0) 364 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:25.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:25 vm09 bash[21099]: audit 2026-03-10T07:18:24.514951+0000 mon.vm09 (mon.1) 3 : audit [DBG] from='client.? 192.168.123.109:0/1120210459' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:25.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:25 vm09 bash[21099]: audit 2026-03-10T07:18:24.680984+0000 mon.vm05 (mon.0) 365 : audit [DBG] from='client.? 192.168.123.105:0/2346599322' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:25.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:25 vm05 bash[17520]: audit 2026-03-10T07:18:23.985511+0000 mon.vm05 (mon.0) 360 : audit [INF] from='client.? 192.168.123.105:0/3554698269' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "165a1577-c628-4924-8467-6ee181e4ae8f"}]: dispatch
2026-03-10T07:18:25.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:25 vm05 bash[17520]: audit 2026-03-10T07:18:24.077075+0000 mon.vm05 (mon.0) 361 : audit [INF] from='client.? 192.168.123.105:0/3554698269' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "165a1577-c628-4924-8467-6ee181e4ae8f"}]': finished
2026-03-10T07:18:25.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:25 vm05 bash[17520]: cluster 2026-03-10T07:18:24.079458+0000 mon.vm05 (mon.0) 362 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in
2026-03-10T07:18:25.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:25 vm05 bash[17520]: audit 2026-03-10T07:18:24.080109+0000 mon.vm05 (mon.0) 363 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:25.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:25 vm05 bash[17520]: audit 2026-03-10T07:18:24.080267+0000 mon.vm05 (mon.0) 364 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:25.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:25 vm05 bash[17520]: audit 2026-03-10T07:18:24.514951+0000 mon.vm09 (mon.1) 3 : audit [DBG] from='client.? 192.168.123.109:0/1120210459' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:25.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:25 vm05 bash[17520]: audit 2026-03-10T07:18:24.680984+0000 mon.vm05 (mon.0) 365 : audit [DBG] from='client.? 192.168.123.105:0/2346599322' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:26.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:26 vm09 bash[21099]: cluster 2026-03-10T07:18:24.637776+0000 mgr.vm05.wnsmpp (mgr.14195) 66 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:26.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:26 vm05 bash[17520]: cluster 2026-03-10T07:18:24.637776+0000 mgr.vm05.wnsmpp (mgr.14195) 66 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: cluster 2026-03-10T07:18:26.638001+0000 mgr.vm05.wnsmpp (mgr.14195) 67 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:27.672943+0000 mon.vm05 (mon.0) 366 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:27.890785+0000 mon.vm09 (mon.1) 4 : audit [INF] from='client.? 192.168.123.109:0/2716822817' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f64d9f57-1660-4a5e-a3ad-5bb16faca664"}]: dispatch
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:27.895395+0000 mon.vm05 (mon.0) 367 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f64d9f57-1660-4a5e-a3ad-5bb16faca664"}]: dispatch
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:27.900864+0000 mon.vm05 (mon.0) 368 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f64d9f57-1660-4a5e-a3ad-5bb16faca664"}]': finished
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: cluster 2026-03-10T07:18:27.903230+0000 mon.vm05 (mon.0) 369 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:27.903363+0000 mon.vm05 (mon.0) 370 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:27.903446+0000 mon.vm05 (mon.0) 371 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:27.903697+0000 mon.vm05 (mon.0) 372 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:28.041594+0000 mon.vm05 (mon.0) 373 : audit [INF] from='client.? 192.168.123.105:0/2698406812' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "22a3ff7c-9910-4190-bf2f-45d16541f7ef"}]: dispatch
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:28.046260+0000 mon.vm05 (mon.0) 374 : audit [INF] from='client.? 192.168.123.105:0/2698406812' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "22a3ff7c-9910-4190-bf2f-45d16541f7ef"}]': finished
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: cluster 2026-03-10T07:18:28.048451+0000 mon.vm05 (mon.0) 375 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in
2026-03-10T07:18:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:28.048883+0000 mon.vm05 (mon.0) 376 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:28.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:28.049411+0000 mon.vm05 (mon.0) 377 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:28.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:28.049817+0000 mon.vm05 (mon.0) 378 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:28.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:28 vm09 bash[21099]: audit 2026-03-10T07:18:28.050390+0000 mon.vm05 (mon.0) 379 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: cluster 2026-03-10T07:18:26.638001+0000 mgr.vm05.wnsmpp (mgr.14195) 67 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:27.672943+0000 mon.vm05 (mon.0) 366 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:27.890785+0000 mon.vm09 (mon.1) 4 : audit [INF] from='client.? 192.168.123.109:0/2716822817' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f64d9f57-1660-4a5e-a3ad-5bb16faca664"}]: dispatch
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:27.895395+0000 mon.vm05 (mon.0) 367 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f64d9f57-1660-4a5e-a3ad-5bb16faca664"}]: dispatch
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:27.900864+0000 mon.vm05 (mon.0) 368 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f64d9f57-1660-4a5e-a3ad-5bb16faca664"}]': finished
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: cluster 2026-03-10T07:18:27.903230+0000 mon.vm05 (mon.0) 369 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:27.903363+0000 mon.vm05 (mon.0) 370 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:27.903446+0000 mon.vm05 (mon.0) 371 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:27.903697+0000 mon.vm05 (mon.0) 372 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:28.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:28.041594+0000 mon.vm05 (mon.0) 373 : audit [INF] from='client.? 192.168.123.105:0/2698406812' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "22a3ff7c-9910-4190-bf2f-45d16541f7ef"}]: dispatch
2026-03-10T07:18:28.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:28.046260+0000 mon.vm05 (mon.0) 374 : audit [INF] from='client.? 192.168.123.105:0/2698406812' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "22a3ff7c-9910-4190-bf2f-45d16541f7ef"}]': finished
2026-03-10T07:18:28.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: cluster 2026-03-10T07:18:28.048451+0000 mon.vm05 (mon.0) 375 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in
2026-03-10T07:18:28.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:28.048883+0000 mon.vm05 (mon.0) 376 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:28.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:28.049411+0000 mon.vm05 (mon.0) 377 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:28.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:28.049817+0000 mon.vm05 (mon.0) 378 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:28.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:28 vm05 bash[17520]: audit 2026-03-10T07:18:28.050390+0000 mon.vm05 (mon.0) 379 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:28.823 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:18:29.101 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:18:29.155 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773127108,"num_remapped_pgs":0}
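This osd stat sample is the first to show all four OSDs registered: epoch 9, num_osds 4 and num_in_osds 4, while num_up_osds is still 0 because the daemons have not finished starting. The epoch is consistent with the audit trail, e5 before the apply and one epoch per "osd new" (e6 through e9). As a quick check:

    # One osdmap epoch per registered OSD: e5 before the apply, e6..e9 after.
    epoch_before, osds_registered = 5, 4
    assert epoch_before + osds_registered == 9
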
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773127108,"num_remapped_pgs":0} 2026-03-10T07:18:29.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:29 vm09 bash[21099]: audit 2026-03-10T07:18:28.488337+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.109:0/2929793011' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:29.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:29 vm09 bash[21099]: audit 2026-03-10T07:18:28.488337+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.109:0/2929793011' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:29.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:29 vm09 bash[21099]: audit 2026-03-10T07:18:28.660480+0000 mon.vm05 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.105:0/3033228431' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:29.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:29 vm09 bash[21099]: audit 2026-03-10T07:18:28.660480+0000 mon.vm05 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.105:0/3033228431' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:29.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:29 vm05 bash[17520]: audit 2026-03-10T07:18:28.488337+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.109:0/2929793011' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:29.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:29 vm05 bash[17520]: audit 2026-03-10T07:18:28.488337+0000 mon.vm09 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.109:0/2929793011' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:29.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:29 vm05 bash[17520]: audit 2026-03-10T07:18:28.660480+0000 mon.vm05 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.105:0/3033228431' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:29.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:29 vm05 bash[17520]: audit 2026-03-10T07:18:28.660480+0000 mon.vm05 (mon.0) 380 : audit [DBG] from='client.? 192.168.123.105:0/3033228431' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:18:30.156 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json 2026-03-10T07:18:30.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:30 vm09 bash[21099]: cluster 2026-03-10T07:18:28.638183+0000 mgr.vm05.wnsmpp (mgr.14195) 68 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:30.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:30 vm09 bash[21099]: cluster 2026-03-10T07:18:28.638183+0000 mgr.vm05.wnsmpp (mgr.14195) 68 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:30.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:30 vm09 bash[21099]: audit 2026-03-10T07:18:29.100403+0000 mon.vm05 (mon.0) 381 : audit [DBG] from='client.? 
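Note: the orchestra.run lines above are the harness's readiness poll. It opens a one-shot cephadm shell and reads ceph osd stat -f json; the reported num_osds climbs (4 at epoch 9 here) as each osd new below registers another OSD in the map. A minimal sketch of that poll in Python, assuming the image, fsid, and key paths from the DEBUG line above (osd_stat is a hypothetical helper for illustration, not teuthology's actual code):

    import json
    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "f0f57d3c-1c50-11f1-837e-f755e850132e"

    def osd_stat():
        # Same invocation as the DEBUG line above: run `ceph osd stat -f json`
        # inside a one-shot cephadm shell container and parse its JSON output.
        out = subprocess.check_output([
            "sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE,
            "shell", "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID, "--",
            "ceph", "osd", "stat", "-f", "json",
        ])
        # e.g. {"epoch":9,"num_osds":4,"num_up_osds":0,...} as printed above
        return json.loads(out)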
2026-03-10T07:18:30.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:30 vm05 bash[17520]: cluster 2026-03-10T07:18:28.638183+0000 mgr.vm05.wnsmpp (mgr.14195) 68 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:30.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:30 vm05 bash[17520]: audit 2026-03-10T07:18:29.100403+0000 mon.vm05 (mon.0) 381 : audit [DBG] from='client.? 192.168.123.105:0/3330129576' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:32.181 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: cluster 2026-03-10T07:18:30.638415+0000 mgr.vm05.wnsmpp (mgr.14195) 69 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.833829+0000 mon.vm09 (mon.1) 6 : audit [INF] from='client.? 192.168.123.109:0/322133992' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a4ecd7d6-8367-42a2-ab73-88c375ccde3b"}]: dispatch
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.838210+0000 mon.vm05 (mon.0) 382 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a4ecd7d6-8367-42a2-ab73-88c375ccde3b"}]: dispatch
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.840589+0000 mon.vm05 (mon.0) 383 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a4ecd7d6-8367-42a2-ab73-88c375ccde3b"}]': finished
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: cluster 2026-03-10T07:18:31.842611+0000 mon.vm05 (mon.0) 384 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.842899+0000 mon.vm05 (mon.0) 385 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.843010+0000 mon.vm05 (mon.0) 386 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.843159+0000 mon.vm05 (mon.0) 387 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.843259+0000 mon.vm05 (mon.0) 388 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:31.843381+0000 mon.vm05 (mon.0) 389 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:32.182 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.017845+0000 mon.vm05 (mon.0) 390 : audit [INF] from='client.? 192.168.123.105:0/2865894366' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d064a57-509f-4d38-a4f5-0eded18ac3cd"}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: cluster 2026-03-10T07:18:30.638415+0000 mgr.vm05.wnsmpp (mgr.14195) 69 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.833829+0000 mon.vm09 (mon.1) 6 : audit [INF] from='client.? 192.168.123.109:0/322133992' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a4ecd7d6-8367-42a2-ab73-88c375ccde3b"}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.838210+0000 mon.vm05 (mon.0) 382 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a4ecd7d6-8367-42a2-ab73-88c375ccde3b"}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.840589+0000 mon.vm05 (mon.0) 383 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a4ecd7d6-8367-42a2-ab73-88c375ccde3b"}]': finished
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: cluster 2026-03-10T07:18:31.842611+0000 mon.vm05 (mon.0) 384 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.842899+0000 mon.vm05 (mon.0) 385 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.843010+0000 mon.vm05 (mon.0) 386 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.843159+0000 mon.vm05 (mon.0) 387 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.843259+0000 mon.vm05 (mon.0) 388 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:31.843381+0000 mon.vm05 (mon.0) 389 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.017845+0000 mon.vm05 (mon.0) 390 : audit [INF] from='client.? 192.168.123.105:0/2865894366' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d064a57-509f-4d38-a4f5-0eded18ac3cd"}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.020735+0000 mon.vm05 (mon.0) 391 : audit [INF] from='client.? 192.168.123.105:0/2865894366' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d064a57-509f-4d38-a4f5-0eded18ac3cd"}]': finished
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: cluster 2026-03-10T07:18:32.023149+0000 mon.vm05 (mon.0) 392 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.023435+0000 mon.vm05 (mon.0) 393 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.023588+0000 mon.vm05 (mon.0) 394 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.023729+0000 mon.vm05 (mon.0) 395 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.023858+0000 mon.vm05 (mon.0) 396 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.023993+0000 mon.vm05 (mon.0) 397 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:32.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:32 vm05 bash[17520]: audit 2026-03-10T07:18:32.024141+0000 mon.vm05 (mon.0) 398 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.020735+0000 mon.vm05 (mon.0) 391 : audit [INF] from='client.? 192.168.123.105:0/2865894366' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d064a57-509f-4d38-a4f5-0eded18ac3cd"}]': finished
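Note: each successful "osd new" bumps the osdmap epoch, and the mon logs a one-line summary (osdmap e8: 3 total through e11: 6 total, 0 up, 6 in so far). That progression can be pulled out of a log like this one with a short script; a sketch (the log path is a placeholder):

    import re

    # Matches the mon summaries above, e.g. "osdmap e11: 6 total, 0 up, 6 in"
    pattern = re.compile(r"osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in")

    seen = set()
    with open("teuthology.log") as log:          # placeholder path
        for line in log:
            for epoch, total, up, in_ in pattern.findall(line):
                if epoch not in seen:            # the same record is relayed by both mons
                    seen.add(epoch)
                    print(f"e{epoch}: {total} total, {up} up, {in_} in")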
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: cluster 2026-03-10T07:18:32.023149+0000 mon.vm05 (mon.0) 392 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.023435+0000 mon.vm05 (mon.0) 393 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.023588+0000 mon.vm05 (mon.0) 394 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.023729+0000 mon.vm05 (mon.0) 395 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.023858+0000 mon.vm05 (mon.0) 396 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.023993+0000 mon.vm05 (mon.0) 397 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:32 vm09 bash[21099]: audit 2026-03-10T07:18:32.024141+0000 mon.vm05 (mon.0) 398 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:33.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:33 vm05 bash[17520]: audit 2026-03-10T07:18:32.436040+0000 mon.vm09 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/2937875396' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:33.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:33 vm05 bash[17520]: audit 2026-03-10T07:18:32.622752+0000 mon.vm05 (mon.0) 399 : audit [DBG] from='client.? 192.168.123.105:0/4229961081' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:33.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:33 vm09 bash[21099]: audit 2026-03-10T07:18:32.436040+0000 mon.vm09 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/2937875396' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:33.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:33 vm09 bash[21099]: audit 2026-03-10T07:18:32.622752+0000 mon.vm05 (mon.0) 399 : audit [DBG] from='client.? 192.168.123.105:0/4229961081' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:33.796 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:18:34.091 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:18:34.148 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773127112,"num_remapped_pgs":0}
2026-03-10T07:18:34.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:34 vm05 bash[17520]: cluster 2026-03-10T07:18:32.638634+0000 mgr.vm05.wnsmpp (mgr.14195) 70 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:34.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:34 vm05 bash[17520]: audit 2026-03-10T07:18:34.092375+0000 mon.vm05 (mon.0) 400 : audit [DBG] from='client.? 192.168.123.105:0/2124403125' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:34.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:34 vm09 bash[21099]: cluster 2026-03-10T07:18:32.638634+0000 mgr.vm05.wnsmpp (mgr.14195) 70 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:34.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:34 vm09 bash[21099]: audit 2026-03-10T07:18:34.092375+0000 mon.vm05 (mon.0) 400 : audit [DBG] from='client.? 192.168.123.105:0/2124403125' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:35.149 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: cluster 2026-03-10T07:18:34.638895+0000 mgr.vm05.wnsmpp (mgr.14195) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.546174+0000 mon.vm09 (mon.1) 8 : audit [INF] from='client.? 192.168.123.109:0/4277953331' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0448ea07-efa1-439b-a742-4885c961ceee"}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.550794+0000 mon.vm05 (mon.0) 401 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0448ea07-efa1-439b-a742-4885c961ceee"}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.553773+0000 mon.vm05 (mon.0) 402 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0448ea07-efa1-439b-a742-4885c961ceee"}]': finished
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: cluster 2026-03-10T07:18:35.556580+0000 mon.vm05 (mon.0) 403 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.556744+0000 mon.vm05 (mon.0) 404 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.556912+0000 mon.vm05 (mon.0) 405 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.557064+0000 mon.vm05 (mon.0) 406 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.557238+0000 mon.vm05 (mon.0) 407 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.557371+0000 mon.vm05 (mon.0) 408 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.557496+0000 mon.vm05 (mon.0) 409 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:35.557651+0000 mon.vm05 (mon.0) 410 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:36 vm09 bash[21099]: audit 2026-03-10T07:18:36.148696+0000 mon.vm09 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.109:0/1825109240' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:36.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: cluster 2026-03-10T07:18:34.638895+0000 mgr.vm05.wnsmpp (mgr.14195) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:36.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.546174+0000 mon.vm09 (mon.1) 8 : audit [INF] from='client.? 192.168.123.109:0/4277953331' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0448ea07-efa1-439b-a742-4885c961ceee"}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.550794+0000 mon.vm05 (mon.0) 401 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0448ea07-efa1-439b-a742-4885c961ceee"}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.553773+0000 mon.vm05 (mon.0) 402 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0448ea07-efa1-439b-a742-4885c961ceee"}]': finished
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: cluster 2026-03-10T07:18:35.556580+0000 mon.vm05 (mon.0) 403 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.556744+0000 mon.vm05 (mon.0) 404 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.556912+0000 mon.vm05 (mon.0) 405 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.557064+0000 mon.vm05 (mon.0) 406 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.557238+0000 mon.vm05 (mon.0) 407 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.557371+0000 mon.vm05 (mon.0) 408 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.557496+0000 mon.vm05 (mon.0) 409 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:35.557651+0000 mon.vm05 (mon.0) 410 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:36 vm05 bash[17520]: audit 2026-03-10T07:18:36.148696+0000 mon.vm09 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.109:0/1825109240' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.269398+0000 mon.vm05 (mon.0) 411 : audit [INF] from='client.? 192.168.123.105:0/3599592736' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27f4726-ebcc-445c-905f-5dd7d49f4c2e"}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.272693+0000 mon.vm05 (mon.0) 412 : audit [INF] from='client.? 192.168.123.105:0/3599592736' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27f4726-ebcc-445c-905f-5dd7d49f4c2e"}]': finished
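Note: through osdmap e12 the map shows 7 OSDs total but 0 up. "osd new" only reserves the id and marks the OSD in; an OSD is counted up once its daemon starts and reports to the mons, which has not happened yet at this point in the run. A harness waiting for the daemons would wrap the same osd stat poll in a timeout; a sketch, reusing the hypothetical osd_stat helper from the earlier note:

    import time

    def wait_for_osds_up(want, timeout=600, interval=5):
        # Poll until num_up_osds reaches `want`, or fail after `timeout` seconds.
        deadline = time.time() + timeout
        stat = osd_stat()
        while stat["num_up_osds"] < want:
            if time.time() > deadline:
                raise TimeoutError(
                    "only %d/%d OSDs up after %ds"
                    % (stat["num_up_osds"], want, timeout))
            time.sleep(interval)
            stat = osd_stat()
        return stat

    wait_for_osds_up(8)   # all 8 OSDs are in the map by e13 below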
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: cluster 2026-03-10T07:18:36.275440+0000 mon.vm05 (mon.0) 413 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.277322+0000 mon.vm05 (mon.0) 414 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.277828+0000 mon.vm05 (mon.0) 415 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.278335+0000 mon.vm05 (mon.0) 416 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.278848+0000 mon.vm05 (mon.0) 417 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.279383+0000 mon.vm05 (mon.0) 418 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.279885+0000 mon.vm05 (mon.0) 419 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.280417+0000 mon.vm05 (mon.0) 420 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.280838+0000 mon.vm05 (mon.0) 421 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:37.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:37 vm09 bash[21099]: audit 2026-03-10T07:18:36.892250+0000 mon.vm05 (mon.0) 422 : audit [DBG] from='client.? 192.168.123.105:0/3999920224' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.269398+0000 mon.vm05 (mon.0) 411 : audit [INF] from='client.? 192.168.123.105:0/3599592736' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27f4726-ebcc-445c-905f-5dd7d49f4c2e"}]: dispatch
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.272693+0000 mon.vm05 (mon.0) 412 : audit [INF] from='client.? 192.168.123.105:0/3599592736' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27f4726-ebcc-445c-905f-5dd7d49f4c2e"}]': finished
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: cluster 2026-03-10T07:18:36.275440+0000 mon.vm05 (mon.0) 413 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.277322+0000 mon.vm05 (mon.0) 414 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.277828+0000 mon.vm05 (mon.0) 415 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.278335+0000 mon.vm05 (mon.0) 416 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.278848+0000 mon.vm05 (mon.0) 417 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.279383+0000 mon.vm05 (mon.0) 418 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:37.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.279885+0000 mon.vm05 (mon.0) 419 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.280417+0000 mon.vm05 (mon.0) 420 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.280838+0000 mon.vm05 (mon.0) 421 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:37 vm05 bash[17520]: audit 2026-03-10T07:18:36.892250+0000 mon.vm05 (mon.0) 422 : audit [DBG] from='client.? 192.168.123.105:0/3999920224' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T07:18:38.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:38 vm09 bash[21099]: cluster 2026-03-10T07:18:36.639139+0000 mgr.vm05.wnsmpp (mgr.14195) 72 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:38.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:38 vm05 bash[17520]: cluster 2026-03-10T07:18:36.639139+0000 mgr.vm05.wnsmpp (mgr.14195) 72 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:39.788 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:18:40.051 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:18:40.186 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773127116,"num_remapped_pgs":0}
2026-03-10T07:18:40.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:40 vm05 bash[17520]: cluster 2026-03-10T07:18:38.639339+0000 mgr.vm05.wnsmpp (mgr.14195) 73 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:40.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:40 vm05 bash[17520]: audit 2026-03-10T07:18:40.052134+0000 mon.vm05 (mon.0) 423 : audit [DBG] from='client.? 192.168.123.105:0/942116256' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:40.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:40 vm09 bash[21099]: cluster 2026-03-10T07:18:38.639339+0000 mgr.vm05.wnsmpp (mgr.14195) 73 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:40 vm09 bash[21099]: audit 2026-03-10T07:18:40.052134+0000 mon.vm05 (mon.0) 423 : audit [DBG] from='client.? 192.168.123.105:0/942116256' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:40 vm09 bash[21099]: audit 2026-03-10T07:18:40.052134+0000 mon.vm05 (mon.0) 423 : audit [DBG] from='client.? 
192.168.123.105:0/942116256' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:18:41.187 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json 2026-03-10T07:18:42.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:42 vm09 bash[21099]: cluster 2026-03-10T07:18:40.639603+0000 mgr.vm05.wnsmpp (mgr.14195) 74 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:42.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:42 vm09 bash[21099]: cluster 2026-03-10T07:18:40.639603+0000 mgr.vm05.wnsmpp (mgr.14195) 74 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:42.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:42 vm05 bash[17520]: cluster 2026-03-10T07:18:40.639603+0000 mgr.vm05.wnsmpp (mgr.14195) 74 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:42.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:42 vm05 bash[17520]: cluster 2026-03-10T07:18:40.639603+0000 mgr.vm05.wnsmpp (mgr.14195) 74 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:43.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:43 vm09 bash[21099]: audit 2026-03-10T07:18:42.673313+0000 mon.vm05 (mon.0) 424 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:18:43.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:43 vm09 bash[21099]: audit 2026-03-10T07:18:42.673313+0000 mon.vm05 (mon.0) 424 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:18:43.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:43 vm05 bash[17520]: audit 2026-03-10T07:18:42.673313+0000 mon.vm05 (mon.0) 424 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:18:43.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:43 vm05 bash[17520]: audit 2026-03-10T07:18:42.673313+0000 mon.vm05 (mon.0) 424 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:18:44.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:44 vm05 bash[17520]: cluster 2026-03-10T07:18:42.639828+0000 mgr.vm05.wnsmpp (mgr.14195) 75 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:44.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:44 vm05 bash[17520]: cluster 2026-03-10T07:18:42.639828+0000 mgr.vm05.wnsmpp (mgr.14195) 75 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:44.782 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:44 vm09 bash[21099]: cluster 2026-03-10T07:18:42.639828+0000 mgr.vm05.wnsmpp (mgr.14195) 75 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:44.782 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:44 vm09 bash[21099]: cluster 2026-03-10T07:18:42.639828+0000 mgr.vm05.wnsmpp (mgr.14195) 75 : cluster [DBG] pgmap v31: 0 pgs: 
; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:44.958 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:18:45.565 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:18:45.668 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773127116,"num_remapped_pgs":0} 2026-03-10T07:18:46.258 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:18:46.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: cluster 2026-03-10T07:18:44.640088+0000 mgr.vm05.wnsmpp (mgr.14195) 76 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: cluster 2026-03-10T07:18:44.640088+0000 mgr.vm05.wnsmpp (mgr.14195) 76 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.427865+0000 mon.vm05 (mon.0) 425 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.427865+0000 mon.vm05 (mon.0) 425 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.428407+0000 mon.vm05 (mon.0) 426 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.428407+0000 mon.vm05 (mon.0) 426 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.566368+0000 mon.vm05 (mon.0) 427 : audit [DBG] from='client.? 192.168.123.105:0/3432427111' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.566368+0000 mon.vm05 (mon.0) 427 : audit [DBG] from='client.? 
192.168.123.105:0/3432427111' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.849949+0000 mon.vm05 (mon.0) 428 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.849949+0000 mon.vm05 (mon.0) 428 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.850796+0000 mon.vm05 (mon.0) 429 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 bash[21099]: audit 2026-03-10T07:18:45.850796+0000 mon.vm05 (mon.0) 429 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:46 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: cluster 2026-03-10T07:18:44.640088+0000 mgr.vm05.wnsmpp (mgr.14195) 76 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: cluster 2026-03-10T07:18:44.640088+0000 mgr.vm05.wnsmpp (mgr.14195) 76 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.427865+0000 mon.vm05 (mon.0) 425 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.427865+0000 mon.vm05 (mon.0) 425 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.428407+0000 mon.vm05 (mon.0) 426 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.428407+0000 mon.vm05 (mon.0) 426 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 
bash[17520]: audit 2026-03-10T07:18:45.566368+0000 mon.vm05 (mon.0) 427 : audit [DBG] from='client.? 192.168.123.105:0/3432427111' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.566368+0000 mon.vm05 (mon.0) 427 : audit [DBG] from='client.? 192.168.123.105:0/3432427111' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.849949+0000 mon.vm05 (mon.0) 428 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.849949+0000 mon.vm05 (mon.0) 428 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.850796+0000 mon.vm05 (mon.0) 429 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 bash[17520]: audit 2026-03-10T07:18:45.850796+0000 mon.vm05 (mon.0) 429 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:46.666 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:18:46.668 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json 2026-03-10T07:18:46.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:46 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
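The two "ceph osd stat -f json" invocations above are the harness polling the new cluster until the OSDs it just created report up (num_up_osds is still 0 at osdmap e13). A minimal sketch of that kind of wait loop, assuming jq is available and picking an arbitrary retry cap and sleep interval (neither is taken from this run); the cephadm invocation itself is copied from the DEBUG lines above:

    # Poll `ceph osd stat -f json` until every OSD in the map reports up.
    # Sketch only: jq, the 90-try cap, and the 5s sleep are assumptions.
    for _ in $(seq 1 90); do
      stat=$(sudo /home/ubuntu/cephtest/cephadm \
        --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json)
      up=$(echo "$stat" | jq .num_up_osds)
      total=$(echo "$stat" | jq .num_osds)
      if [ "$total" -gt 0 ] && [ "$up" -eq "$total" ]; then break; fi
      sleep 5
    done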
2026-03-10T07:18:47.396 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: cephadm 2026-03-10T07:18:45.428831+0000 mgr.vm05.wnsmpp (mgr.14195) 77 : cephadm [INF] Deploying daemon osd.0 on vm09
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: cephadm 2026-03-10T07:18:45.851502+0000 mgr.vm05.wnsmpp (mgr.14195) 78 : cephadm [INF] Deploying daemon osd.1 on vm05
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:46.505961+0000 mon.vm05 (mon.0) 430 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:46.517310+0000 mon.vm05 (mon.0) 431 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:46.519678+0000 mon.vm05 (mon.0) 432 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:46.523602+0000 mon.vm05 (mon.0) 433 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:47.129276+0000 mon.vm05 (mon.0) 434 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:47.141031+0000 mon.vm05 (mon.0) 435 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:47.141919+0000 mon.vm05 (mon.0) 436 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T07:18:47.397 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:47 vm05 bash[17520]: audit 2026-03-10T07:18:47.144020+0000 mon.vm05 (mon.0) 437 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:47.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: cephadm 2026-03-10T07:18:45.428831+0000 mgr.vm05.wnsmpp (mgr.14195) 77 : cephadm [INF] Deploying daemon osd.0 on vm09
2026-03-10T07:18:47.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: cephadm 2026-03-10T07:18:45.851502+0000 mgr.vm05.wnsmpp (mgr.14195) 78 : cephadm [INF] Deploying daemon osd.1 on vm05
2026-03-10T07:18:47.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:46.505961+0000 mon.vm05 (mon.0) 430 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:46.517310+0000 mon.vm05 (mon.0) 431 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:46.519678+0000 mon.vm05 (mon.0) 432 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T07:18:47.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:46.523602+0000 mon.vm05 (mon.0) 433 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:47.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:47.129276+0000 mon.vm05 (mon.0) 434 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:47.141031+0000 mon.vm05 (mon.0) 435 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:47.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:47.141919+0000 mon.vm05 (mon.0) 436 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T07:18:47.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 bash[21099]: audit 2026-03-10T07:18:47.144020+0000 mon.vm05 (mon.0) 437 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:47.734 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:47 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 bash[17520]: cephadm 2026-03-10T07:18:46.524084+0000 mgr.vm05.wnsmpp (mgr.14195) 79 : cephadm [INF] Deploying daemon osd.2 on vm09
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 bash[17520]: cluster 2026-03-10T07:18:46.640377+0000 mgr.vm05.wnsmpp (mgr.14195) 80 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 bash[17520]: cephadm 2026-03-10T07:18:47.145227+0000 mgr.vm05.wnsmpp (mgr.14195) 81 : cephadm [INF] Deploying daemon osd.3 on vm05
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 bash[17520]: audit 2026-03-10T07:18:47.994392+0000 mon.vm05 (mon.0) 438 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 bash[17520]: audit 2026-03-10T07:18:47.999906+0000 mon.vm05 (mon.0) 439 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 bash[17520]: audit 2026-03-10T07:18:48.001089+0000 mon.vm05 (mon.0) 440 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 bash[17520]: audit 2026-03-10T07:18:48.004538+0000 mon.vm05 (mon.0) 441 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:48.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:48 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:18:48.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:48 vm09 bash[21099]: cephadm 2026-03-10T07:18:46.524084+0000 mgr.vm05.wnsmpp (mgr.14195) 79 : cephadm [INF] Deploying daemon osd.2 on vm09
2026-03-10T07:18:48.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:48 vm09 bash[21099]: cluster 2026-03-10T07:18:46.640377+0000 mgr.vm05.wnsmpp (mgr.14195) 80 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:48.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:48 vm09 bash[21099]: cephadm 2026-03-10T07:18:47.145227+0000 mgr.vm05.wnsmpp (mgr.14195) 81 : cephadm [INF] Deploying daemon osd.3 on vm05
2026-03-10T07:18:48.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:48 vm09 bash[21099]: audit 2026-03-10T07:18:47.994392+0000 mon.vm05 (mon.0) 438 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:48.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:48 vm09 bash[21099]: audit 2026-03-10T07:18:47.999906+0000 mon.vm05 (mon.0) 439 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:48.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:48 vm09 bash[21099]: audit 2026-03-10T07:18:48.001089+0000 mon.vm05 (mon.0) 440 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T07:18:48.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:48 vm09 bash[21099]: audit 2026-03-10T07:18:48.004538+0000 mon.vm05 (mon.0) 441 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:49.321 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:49 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
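The recurring systemd complaint points at line 23 of the unit template cephadm installs for this cluster's daemons, which sets KillMode=none so that systemd leaves the daemon's container processes to the container runtime rather than killing them itself. For an ordinary service, the remedy the warning asks for is a drop-in override; a sketch of that shape using the standard systemd drop-in mechanism (illustrative only: changing a cephadm-managed unit this way is not something this test does, and cephadm chooses KillMode=none deliberately):

    # Drop-in that would replace KillMode=none with KillMode=mixed for every
    # instance of the template unit. Illustrative only, not a recommendation
    # for cephadm-managed daemons.
    unit=ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service
    sudo mkdir -p /etc/systemd/system/${unit}.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/${unit}.d/10-killmode.conf
    sudo systemctl daemon-reload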
2026-03-10T07:18:49.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:49 vm05 bash[17520]: cephadm 2026-03-10T07:18:48.005527+0000 mgr.vm05.wnsmpp (mgr.14195) 82 : cephadm [INF] Deploying daemon osd.4 on vm09
2026-03-10T07:18:49.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:49 vm05 bash[17520]: audit 2026-03-10T07:18:48.637874+0000 mon.vm05 (mon.0) 442 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:49.567 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:49 vm05 bash[17520]: audit 2026-03-10T07:18:48.644990+0000 mon.vm05 (mon.0) 443 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:49.567 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:49 vm05 bash[17520]: audit 2026-03-10T07:18:48.645740+0000 mon.vm05 (mon.0) 444 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T07:18:49.567 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:49 vm05 bash[17520]: audit 2026-03-10T07:18:48.648599+0000 mon.vm05 (mon.0) 445 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:49.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:49 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:18:49.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:49 vm09 bash[21099]: cephadm 2026-03-10T07:18:48.005527+0000 mgr.vm05.wnsmpp (mgr.14195) 82 : cephadm [INF] Deploying daemon osd.4 on vm09
2026-03-10T07:18:49.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:49 vm09 bash[21099]: audit 2026-03-10T07:18:48.637874+0000 mon.vm05 (mon.0) 442 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:49.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:49 vm09 bash[21099]: audit 2026-03-10T07:18:48.644990+0000 mon.vm05 (mon.0) 443 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:49.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:49 vm09 bash[21099]: audit 2026-03-10T07:18:48.645740+0000 mon.vm05 (mon.0) 444 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T07:18:49.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:49 vm09 bash[21099]: audit 2026-03-10T07:18:48.648599+0000 mon.vm05 (mon.0) 445 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:18:49.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:49 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:18:50.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
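Each "Deploying daemon osd.N on ..." line is the cephadm mgr placing one of the eight OSDs it just created: for every id it fetches the daemon keyring (auth get osd.N) and renders a minimal config (config generate-minimal-conf) before shipping both to vm05 or vm09. The rollout can be watched with the same grep-over-orch-ps pattern the suite's own shell tasks use; a sketch, with the count of 8 taken from the "osdmap e13: 8 total" line earlier (the sleep interval is an assumption):

    # Wait until all 8 OSDs the mgr is deploying show as running (sketch).
    # The 8 comes from "osdmap e13: 8 total" earlier in this run.
    while [ "$(ceph orch ps | grep '^osd\.' | grep -c running)" -lt 8 ]; do
      sleep 5
    done
    ceph orch ps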
2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: cluster 2026-03-10T07:18:48.640565+0000 mgr.vm05.wnsmpp (mgr.14195) 83 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: cluster 2026-03-10T07:18:48.640565+0000 mgr.vm05.wnsmpp (mgr.14195) 83 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: cephadm 2026-03-10T07:18:48.649092+0000 mgr.vm05.wnsmpp (mgr.14195) 84 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: cephadm 2026-03-10T07:18:48.649092+0000 mgr.vm05.wnsmpp (mgr.14195) 84 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.435088+0000 mon.vm05 (mon.0) 446 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.435088+0000 mon.vm05 (mon.0) 446 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.439043+0000 mon.vm05 (mon.0) 447 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.439043+0000 mon.vm05 (mon.0) 447 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.439651+0000 mon.vm05 (mon.0) 448 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.439651+0000 mon.vm05 (mon.0) 448 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.440185+0000 mon.vm05 (mon.0) 449 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:49.440185+0000 mon.vm05 (mon.0) 449 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.118179+0000 mon.vm05 (mon.0) 450 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.118179+0000 mon.vm05 (mon.0) 450 : audit [INF] from='mgr.14195 
192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.501 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.123578+0000 mon.vm05 (mon.0) 451 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.502 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.123578+0000 mon.vm05 (mon.0) 451 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.502 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.124210+0000 mon.vm05 (mon.0) 452 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T07:18:50.502 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.124210+0000 mon.vm05 (mon.0) 452 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T07:18:50.502 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.124736+0000 mon.vm05 (mon.0) 453 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.502 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 bash[21099]: audit 2026-03-10T07:18:50.124736+0000 mon.vm05 (mon.0) 453 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: cluster 2026-03-10T07:18:48.640565+0000 mgr.vm05.wnsmpp (mgr.14195) 83 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: cluster 2026-03-10T07:18:48.640565+0000 mgr.vm05.wnsmpp (mgr.14195) 83 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: cephadm 2026-03-10T07:18:48.649092+0000 mgr.vm05.wnsmpp (mgr.14195) 84 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: cephadm 2026-03-10T07:18:48.649092+0000 mgr.vm05.wnsmpp (mgr.14195) 84 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.435088+0000 mon.vm05 (mon.0) 446 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.435088+0000 mon.vm05 (mon.0) 446 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.439043+0000 mon.vm05 (mon.0) 447 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.439043+0000 mon.vm05 (mon.0) 
447 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.439651+0000 mon.vm05 (mon.0) 448 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.439651+0000 mon.vm05 (mon.0) 448 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.440185+0000 mon.vm05 (mon.0) 449 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:49.440185+0000 mon.vm05 (mon.0) 449 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.118179+0000 mon.vm05 (mon.0) 450 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.118179+0000 mon.vm05 (mon.0) 450 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.123578+0000 mon.vm05 (mon.0) 451 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.123578+0000 mon.vm05 (mon.0) 451 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.124210+0000 mon.vm05 (mon.0) 452 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.124210+0000 mon.vm05 (mon.0) 452 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.124736+0000 mon.vm05 (mon.0) 453 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.510 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:50 vm05 bash[17520]: audit 2026-03-10T07:18:50.124736+0000 mon.vm05 (mon.0) 453 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:18:50.755 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 
vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:18:51.015 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:50 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:18:51.336 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:18:51.694 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T07:18:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: cephadm 2026-03-10T07:18:49.440649+0000 mgr.vm05.wnsmpp (mgr.14195) 85 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-10T07:18:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: cephadm 2026-03-10T07:18:49.440649+0000 mgr.vm05.wnsmpp (mgr.14195) 85 : cephadm [INF] Deploying daemon osd.6 on vm09 2026-03-10T07:18:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: cephadm 2026-03-10T07:18:50.125173+0000 mgr.vm05.wnsmpp (mgr.14195) 86 : cephadm [INF] Deploying daemon osd.7 on vm05 2026-03-10T07:18:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: cephadm 2026-03-10T07:18:50.125173+0000 mgr.vm05.wnsmpp (mgr.14195) 86 : cephadm [INF] Deploying daemon osd.7 on vm05 2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: cluster 2026-03-10T07:18:50.643594+0000 mgr.vm05.wnsmpp (mgr.14195) 87 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: cluster 2026-03-10T07:18:50.643594+0000 mgr.vm05.wnsmpp (mgr.14195) 87 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: audit 2026-03-10T07:18:50.680905+0000 mon.vm09 (mon.1) 10 : audit [INF] from='osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: audit 2026-03-10T07:18:50.690109+0000 mon.vm05 (mon.0) 454 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: audit 2026-03-10T07:18:50.741628+0000 mon.vm05 (mon.0) 455 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: audit 2026-03-10T07:18:50.879272+0000 mon.vm05 (mon.0) 456 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:51.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:51 vm05 bash[17520]: audit 2026-03-10T07:18:50.884797+0000 mon.vm05 (mon.0) 457 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: cephadm 2026-03-10T07:18:49.440649+0000 mgr.vm05.wnsmpp (mgr.14195) 85 : cephadm [INF] Deploying daemon osd.6 on vm09
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: cephadm 2026-03-10T07:18:50.125173+0000 mgr.vm05.wnsmpp (mgr.14195) 86 : cephadm [INF] Deploying daemon osd.7 on vm05
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: cluster 2026-03-10T07:18:50.643594+0000 mgr.vm05.wnsmpp (mgr.14195) 87 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: audit 2026-03-10T07:18:50.680905+0000 mon.vm09 (mon.1) 10 : audit [INF] from='osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: audit 2026-03-10T07:18:50.690109+0000 mon.vm05 (mon.0) 454 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: audit 2026-03-10T07:18:50.741628+0000 mon.vm05 (mon.0) 455 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: audit 2026-03-10T07:18:50.879272+0000 mon.vm05 (mon.0) 456 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:51.967 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:51 vm09 bash[21099]: audit 2026-03-10T07:18:50.884797+0000 mon.vm05 (mon.0) 457 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.704924+0000 mon.vm05 (mon.0) 458 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.705039+0000 mon.vm05 (mon.0) 459 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.709208+0000 mon.vm09 (mon.1) 11 : audit [INF] from='osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: cluster 2026-03-10T07:18:51.758677+0000 mon.vm05 (mon.0) 460 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.758965+0000 mon.vm05 (mon.0) 461 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759105+0000 mon.vm05 (mon.0) 462 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759152+0000 mon.vm05 (mon.0) 463 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759180+0000 mon.vm05 (mon.0) 464 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759208+0000 mon.vm05 (mon.0) 465 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:52.855 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759234+0000 mon.vm05 (mon.0) 466 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759260+0000 mon.vm05 (mon.0) 467 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759290+0000 mon.vm05 (mon.0) 468 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.759324+0000 mon.vm05 (mon.0) 469 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.760889+0000 mon.vm05 (mon.0) 470 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.765448+0000 mon.vm05 (mon.0) 471 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:51.774417+0000 mon.vm05 (mon.0) 472 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:52.161330+0000 mon.vm09 (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.109:6808/1704659954,v1:192.168.123.109:6809/1704659954]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:52.166682+0000 mon.vm05 (mon.0) 473 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T07:18:52.856 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:52 vm09 bash[21099]: audit 2026-03-10T07:18:52.448323+0000 mon.vm05 (mon.0) 474 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:18:52.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.704924+0000 mon.vm05 (mon.0) 458 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T07:18:52.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.705039+0000 mon.vm05 (mon.0) 459 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T07:18:52.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.709208+0000 mon.vm09 (mon.1) 11 : audit [INF] from='osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: cluster 2026-03-10T07:18:51.758677+0000 mon.vm05 (mon.0) 460 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.758965+0000 mon.vm05 (mon.0) 461 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759105+0000 mon.vm05 (mon.0) 462 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759152+0000 mon.vm05 (mon.0) 463 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759180+0000 mon.vm05 (mon.0) 464 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759208+0000 mon.vm05 (mon.0) 465 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759234+0000 mon.vm05 (mon.0) 466 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759260+0000 mon.vm05 (mon.0) 467 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759290+0000 mon.vm05 (mon.0) 468 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.759324+0000 mon.vm05 (mon.0) 469 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.760889+0000 mon.vm05 (mon.0) 470 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.765448+0000 mon.vm05 (mon.0) 471 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:51.774417+0000 mon.vm05 (mon.0) 472 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:52.161330+0000 mon.vm09 (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.109:6808/1704659954,v1:192.168.123.109:6809/1704659954]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:52.166682+0000 mon.vm05 (mon.0) 473 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T07:18:52.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:52 vm05 bash[17520]: audit 2026-03-10T07:18:52.448323+0000 mon.vm05 (mon.0) 474 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: cluster 2026-03-10T07:18:51.700594+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: cluster 2026-03-10T07:18:51.700681+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: cluster 2026-03-10T07:18:52.643806+0000 mgr.vm05.wnsmpp (mgr.14195) 88 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.708419+0000 mon.vm05 (mon.0) 475 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.708553+0000 mon.vm05 (mon.0) 476 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.708598+0000 mon.vm05 (mon.0) 477 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.708625+0000 mon.vm05 (mon.0) 478 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: cluster 2026-03-10T07:18:52.711725+0000 mon.vm05 (mon.0) 479 : cluster [DBG] osdmap e15: 8 total, 0 up, 8 in
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.713068+0000 mon.vm05 (mon.0) 480 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.713205+0000 mon.vm05 (mon.0) 481 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.713678+0000 mon.vm05 (mon.0) 482 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.713745+0000 mon.vm05 (mon.0) 483 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.714485+0000 mon.vm05 (mon.0) 484 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.714648+0000 mon.vm05 (mon.0) 485 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.714696+0000 mon.vm05 (mon.0) 486 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.714741+0000 mon.vm05 (mon.0) 487 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.714784+0000 mon.vm05 (mon.0) 488 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.718359+0000 mon.vm05 (mon.0) 489 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.723056+0000 mon.vm09 (mon.1) 13 : audit [INF] from='osd.2 [v2:192.168.123.109:6808/1704659954,v1:192.168.123.109:6809/1704659954]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.730328+0000 mon.vm05 (mon.0) 490 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:52.756385+0000 mon.vm05 (mon.0) 491 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:53.717550+0000 mon.vm05 (mon.0) 492 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:53.729073+0000 mon.vm05 (mon.0) 493 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:53.729436+0000 mon.vm05 (mon.0) 494 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T07:18:54.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:53 vm09 bash[21099]: audit 2026-03-10T07:18:53.729560+0000 mon.vm05 (mon.0) 495 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T07:18:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: cluster 2026-03-10T07:18:51.700594+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: cluster 2026-03-10T07:18:51.700681+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
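
Note: the bursts of "osd metadata" [DBG] queries from mgr.vm05.wnsmpp follow each new osdmap epoch here (e14, e15, e16); the mgr appears to be re-reading per-OSD metadata whenever the map changes, one query per OSD id, so these are routine debug-level traffic rather than errors. The same data can be pulled by hand:

    ceph osd metadata 3      # metadata for one OSD
    ceph osd metadata        # metadata for all OSDs, as a JSON array
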
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: cluster 2026-03-10T07:18:52.643806+0000 mgr.vm05.wnsmpp (mgr.14195) 88 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.708419+0000 mon.vm05 (mon.0) 475 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.708553+0000 mon.vm05 (mon.0) 476 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.708598+0000 mon.vm05 (mon.0) 477 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.708625+0000 mon.vm05 (mon.0) 478 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: cluster 2026-03-10T07:18:52.711725+0000 mon.vm05 (mon.0) 479 : cluster [DBG] osdmap e15: 8 total, 0 up, 8 in
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.713068+0000 mon.vm05 (mon.0) 480 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.713205+0000 mon.vm05 (mon.0) 481 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.713678+0000 mon.vm05 (mon.0) 482 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.713745+0000 mon.vm05 (mon.0) 483 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.714485+0000 mon.vm05 (mon.0) 484 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.714648+0000 mon.vm05 (mon.0) 485 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.714696+0000 mon.vm05 (mon.0) 486 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.714741+0000 mon.vm05 (mon.0) 487 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.714784+0000 mon.vm05 (mon.0) 488 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.718359+0000 mon.vm05 (mon.0) 489 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.723056+0000 mon.vm09 (mon.1) 13 : audit [INF] from='osd.2 [v2:192.168.123.109:6808/1704659954,v1:192.168.123.109:6809/1704659954]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.730328+0000 mon.vm05 (mon.0) 490 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:52.756385+0000 mon.vm05 (mon.0) 491 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:53.717550+0000 mon.vm05 (mon.0) 492 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:53.729073+0000 mon.vm05 (mon.0) 493 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:53.729436+0000 mon.vm05 (mon.0) 494 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T07:18:54.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:53 vm05 bash[17520]: audit 2026-03-10T07:18:53.729560+0000 mon.vm05 (mon.0) 495 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: cluster 2026-03-10T07:18:51.759203+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: cluster 2026-03-10T07:18:51.759268+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: cluster 2026-03-10T07:18:53.735555+0000 mon.vm05 (mon.0) 496 : cluster [DBG] osdmap e16: 8 total, 0 up, 8 in
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752687+0000 mon.vm05 (mon.0) 497 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752789+0000 mon.vm05 (mon.0) 498 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752821+0000 mon.vm05 (mon.0) 499 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752852+0000 mon.vm05 (mon.0) 500 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752882+0000 mon.vm05 (mon.0) 501 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752912+0000 mon.vm05 (mon.0) 502 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752949+0000 mon.vm05 (mon.0) 503 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.752979+0000 mon.vm05 (mon.0) 504 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:53.756832+0000 mon.vm05 (mon.0) 505 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.003968+0000 mon.vm09 (mon.1) 14 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/2625184720,v1:192.168.123.109:6817/2625184720]' entity='osd.4' cmd=[{"prefix": "osd
crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.003968+0000 mon.vm09 (mon.1) 14 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/2625184720,v1:192.168.123.109:6817/2625184720]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.008293+0000 mon.vm05 (mon.0) 506 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.008293+0000 mon.vm05 (mon.0) 506 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.610548+0000 mon.vm09 (mon.1) 15 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/2440052421,v1:192.168.123.109:6825/2440052421]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.610548+0000 mon.vm09 (mon.1) 15 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/2440052421,v1:192.168.123.109:6825/2440052421]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.614884+0000 mon.vm05 (mon.0) 507 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.614884+0000 mon.vm05 (mon.0) 507 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.663094+0000 mon.vm05 (mon.0) 508 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.663094+0000 mon.vm05 (mon.0) 508 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.717268+0000 mon.vm05 (mon.0) 509 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:18:54.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.717268+0000 mon.vm05 (mon.0) 509 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' 
2026-03-10T07:18:54.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.728598+0000 mon.vm05 (mon.0) 510 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:54.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.754189+0000 mon.vm05 (mon.0) 511 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:54.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:54 vm05 bash[17520]: audit 2026-03-10T07:18:54.755893+0000 mon.vm05 (mon.0) 512 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: cluster 2026-03-10T07:18:51.759203+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: cluster 2026-03-10T07:18:51.759268+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: cluster 2026-03-10T07:18:53.735555+0000 mon.vm05 (mon.0) 496 : cluster [DBG] osdmap e16: 8 total, 0 up, 8 in
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752687+0000 mon.vm05 (mon.0) 497 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752789+0000 mon.vm05 (mon.0) 498 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752821+0000 mon.vm05 (mon.0) 499 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752852+0000 mon.vm05 (mon.0) 500 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752882+0000 mon.vm05 (mon.0) 501 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752912+0000 mon.vm05 (mon.0) 502 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752949+0000 mon.vm05 (mon.0) 503 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.752979+0000 mon.vm05 (mon.0) 504 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:53.756832+0000 mon.vm05 (mon.0) 505 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.003968+0000 mon.vm09 (mon.1) 14 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/2625184720,v1:192.168.123.109:6817/2625184720]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.008293+0000 mon.vm05 (mon.0) 506 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.610548+0000 mon.vm09 (mon.1) 15 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/2440052421,v1:192.168.123.109:6825/2440052421]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.614884+0000 mon.vm05 (mon.0) 507 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.663094+0000 mon.vm05 (mon.0) 508 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.717268+0000 mon.vm05 (mon.0) 509 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.728598+0000 mon.vm05 (mon.0) 510 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.754189+0000 mon.vm05 (mon.0) 511 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:55.001 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:54 vm09 bash[21099]: audit 2026-03-10T07:18:54.755893+0000 mon.vm05 (mon.0) 512 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:55.379 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: cluster 2026-03-10T07:18:53.133539+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: cluster 2026-03-10T07:18:53.133602+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: cluster 2026-03-10T07:18:53.401026+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: cluster 2026-03-10T07:18:53.401078+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: cluster 2026-03-10T07:18:54.644037+0000 mgr.vm05.wnsmpp (mgr.14195) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.821285+0000 mon.vm05 (mon.0) 513 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.821342+0000 mon.vm05 (mon.0) 514 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.821383+0000 mon.vm05 (mon.0) 515 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: cluster 2026-03-10T07:18:54.839546+0000 mon.vm05 (mon.0) 516 : cluster [DBG] osdmap e17: 8 total, 0 up, 8 in
2026-03-10T07:18:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.840503+0000 mon.vm05 (mon.0) 517 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.840592+0000 mon.vm05 (mon.0) 518 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.840633+0000 mon.vm05 (mon.0) 519 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.840874+0000 mon.vm05 (mon.0) 520 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.840915+0000 mon.vm05 (mon.0) 521 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.841112+0000 mon.vm05 (mon.0) 522 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.841150+0000 mon.vm05 (mon.0) 523 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.841344+0000 mon.vm05 (mon.0) 524 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.841381+0000 mon.vm05 (mon.0) 525 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.853732+0000 mon.vm09 (mon.1) 16 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/2625184720,v1:192.168.123.109:6817/2625184720]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.854243+0000 mon.vm09 (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/2440052421,v1:192.168.123.109:6825/2440052421]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.858080+0000 mon.vm05 (mon.0) 526 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:54.858482+0000 mon.vm05 (mon.0) 527 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:55.101929+0000 mon.vm05 (mon.0) 528 : audit [INF] from='osd.1 ' entity='osd.1'
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:55.329227+0000 mon.vm05 (mon.0) 529 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0'
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:55.647487+0000 mon.vm05 (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:55.717633+0000 mon.vm05 (mon.0) 531 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:55.729088+0000 mon.vm05 (mon.0) 532 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:55.754186+0000 mon.vm05 (mon.0) 533 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:56.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:55 vm09 bash[21099]: audit 2026-03-10T07:18:55.755937+0000 mon.vm05 (mon.0) 534 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: cluster 2026-03-10T07:18:53.133539+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: cluster 2026-03-10T07:18:53.133602+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: cluster 2026-03-10T07:18:53.401026+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: cluster 2026-03-10T07:18:53.401078+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: cluster 2026-03-10T07:18:54.644037+0000 mgr.vm05.wnsmpp (mgr.14195) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.821285+0000 mon.vm05 (mon.0) 513 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.821342+0000 mon.vm05 (mon.0) 514 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.821383+0000 mon.vm05 (mon.0) 515 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: cluster 2026-03-10T07:18:54.839546+0000 mon.vm05 (mon.0) 516 : cluster [DBG] osdmap e17: 8 total, 0 up, 8 in
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.840503+0000 mon.vm05 (mon.0) 517 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.840592+0000 mon.vm05 (mon.0) 518 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.840633+0000 mon.vm05 (mon.0) 519 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.840874+0000 mon.vm05 (mon.0) 520 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.840915+0000 mon.vm05 (mon.0) 521 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.841112+0000 mon.vm05 (mon.0) 522 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.841150+0000 mon.vm05 (mon.0) 523 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.841344+0000 mon.vm05 (mon.0) 524 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:56.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.841381+0000 mon.vm05 (mon.0) 525 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.853732+0000 mon.vm09 (mon.1) 16 : audit [INF] from='osd.4 [v2:192.168.123.109:6816/2625184720,v1:192.168.123.109:6817/2625184720]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.854243+0000 mon.vm09 (mon.1) 17 : audit [INF] from='osd.6 [v2:192.168.123.109:6824/2440052421,v1:192.168.123.109:6825/2440052421]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.858080+0000 mon.vm05 (mon.0) 526 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:54.858482+0000 mon.vm05 (mon.0) 527 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:55.101929+0000 mon.vm05 (mon.0) 528 : audit [INF] from='osd.1 ' entity='osd.1'
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:55.329227+0000 mon.vm05 (mon.0) 529 : audit [INF] from='osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620]' entity='osd.0'
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:55.647487+0000 mon.vm05 (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:55.717633+0000 mon.vm05 (mon.0) 531 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:55.729088+0000 mon.vm05 (mon.0) 532 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:55.754186+0000 mon.vm05 (mon.0) 533 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:56.212 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:55 vm05 bash[17520]: audit 2026-03-10T07:18:55.755937+0000 mon.vm05 (mon.0) 534 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:56.349 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:18:56.473 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":18,"num_osds":8,"num_up_osds":2,"osd_up_since":1773127135,"num_in_osds":8,"osd_in_since":1773127116,"num_remapped_pgs":0}
2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.847561+0000 mon.vm05 (mon.0) 535 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.849437+0000 mon.vm05 (mon.0) 536 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.850501+0000 mon.vm05 (mon.0) 537 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.850501+0000 mon.vm05 (mon.0) 537 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.850574+0000 mon.vm05 (mon.0) 538 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.850574+0000 mon.vm05 (mon.0) 538 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: cluster 2026-03-10T07:18:55.864511+0000 mon.vm05 (mon.0) 539 : cluster [INF] osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553] boot 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: cluster 2026-03-10T07:18:55.864511+0000 mon.vm05 (mon.0) 539 : cluster [INF] osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553] boot 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: cluster 2026-03-10T07:18:55.864980+0000 mon.vm05 (mon.0) 540 : cluster [INF] osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620] boot 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: cluster 2026-03-10T07:18:55.864980+0000 mon.vm05 (mon.0) 540 : cluster [INF] osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620] boot 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: cluster 2026-03-10T07:18:55.865013+0000 mon.vm05 (mon.0) 541 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: cluster 2026-03-10T07:18:55.865013+0000 mon.vm05 (mon.0) 541 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888447+0000 mon.vm05 (mon.0) 542 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888447+0000 mon.vm05 (mon.0) 542 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 
2026-03-10T07:18:55.888576+0000 mon.vm05 (mon.0) 543 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888576+0000 mon.vm05 (mon.0) 543 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888698+0000 mon.vm05 (mon.0) 544 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888698+0000 mon.vm05 (mon.0) 544 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888764+0000 mon.vm05 (mon.0) 545 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888764+0000 mon.vm05 (mon.0) 545 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888968+0000 mon.vm05 (mon.0) 546 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.888968+0000 mon.vm05 (mon.0) 546 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.889026+0000 mon.vm05 (mon.0) 547 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.889026+0000 mon.vm05 (mon.0) 547 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.889072+0000 mon.vm05 (mon.0) 548 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.889072+0000 mon.vm05 (mon.0) 548 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 
2026-03-10T07:18:55.889116+0000 mon.vm05 (mon.0) 549 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.889116+0000 mon.vm05 (mon.0) 549 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.889159+0000 mon.vm05 (mon.0) 550 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.889159+0000 mon.vm05 (mon.0) 550 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.894490+0000 mon.vm05 (mon.0) 551 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:55.894490+0000 mon.vm05 (mon.0) 551 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:56.350278+0000 mon.vm05 (mon.0) 552 : audit [DBG] from='client.? 192.168.123.105:0/2158342796' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:18:57.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:56.350278+0000 mon.vm05 (mon.0) 552 : audit [DBG] from='client.? 
192.168.123.105:0/2158342796' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:18:57.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:56.754142+0000 mon.vm05 (mon.0) 553 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:57.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:56.754142+0000 mon.vm05 (mon.0) 553 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:57.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:56.756140+0000 mon.vm05 (mon.0) 554 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:57.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:57 vm09 bash[21099]: audit 2026-03-10T07:18:56.756140+0000 mon.vm05 (mon.0) 554 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.847561+0000 mon.vm05 (mon.0) 535 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.847561+0000 mon.vm05 (mon.0) 535 : audit [INF] from='osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.849437+0000 mon.vm05 (mon.0) 536 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.849437+0000 mon.vm05 (mon.0) 536 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.850501+0000 mon.vm05 (mon.0) 537 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.850501+0000 mon.vm05 (mon.0) 537 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.850574+0000 mon.vm05 (mon.0) 538 : audit [INF] 
from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.850574+0000 mon.vm05 (mon.0) 538 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: cluster 2026-03-10T07:18:55.864511+0000 mon.vm05 (mon.0) 539 : cluster [INF] osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553] boot 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: cluster 2026-03-10T07:18:55.864511+0000 mon.vm05 (mon.0) 539 : cluster [INF] osd.1 [v2:192.168.123.105:6802/3092910553,v1:192.168.123.105:6803/3092910553] boot 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: cluster 2026-03-10T07:18:55.864980+0000 mon.vm05 (mon.0) 540 : cluster [INF] osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620] boot 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: cluster 2026-03-10T07:18:55.864980+0000 mon.vm05 (mon.0) 540 : cluster [INF] osd.0 [v2:192.168.123.109:6800/3105919620,v1:192.168.123.109:6801/3105919620] boot 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: cluster 2026-03-10T07:18:55.865013+0000 mon.vm05 (mon.0) 541 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: cluster 2026-03-10T07:18:55.865013+0000 mon.vm05 (mon.0) 541 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.888447+0000 mon.vm05 (mon.0) 542 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.888447+0000 mon.vm05 (mon.0) 542 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.888576+0000 mon.vm05 (mon.0) 543 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.888576+0000 mon.vm05 (mon.0) 543 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.888698+0000 mon.vm05 (mon.0) 544 : audit 
[DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.888764+0000 mon.vm05 (mon.0) 545 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.888968+0000 mon.vm05 (mon.0) 546 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.889026+0000 mon.vm05 (mon.0) 547 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.889072+0000 mon.vm05 (mon.0) 548 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:57.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.889116+0000 mon.vm05 (mon.0) 549 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:57.436 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.889159+0000 mon.vm05 (mon.0) 550 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:57.436 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:55.894490+0000 mon.vm05 (mon.0) 551 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:57.436 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:56.350278+0000 mon.vm05 (mon.0) 552 : audit [DBG] from='client.? 192.168.123.105:0/2158342796' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:18:57.436 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:56.754142+0000 mon.vm05 (mon.0) 553 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:57.436 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:57 vm05 bash[17520]: audit 2026-03-10T07:18:56.756140+0000 mon.vm05 (mon.0) 554 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:57.474 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd stat -f json
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:54.998289+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:54.998367+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:55.592360+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:55.592424+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:55.701597+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:55.701630+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:56.644285+0000 mgr.vm05.wnsmpp (mgr.14195) 90 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.002808+0000 mon.vm05 (mon.0) 555 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.003132+0000 mon.vm05 (mon.0) 556 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:58.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.003322+0000 mon.vm05 (mon.0) 557 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.028951+0000 mon.vm05 (mon.0) 558 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:57.039385+0000 mon.vm05 (mon.0) 559 : cluster [INF] osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664] boot
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: cluster 2026-03-10T07:18:57.039412+0000 mon.vm05 (mon.0) 560 : cluster [DBG] osdmap e19: 8 total, 3 up, 8 in
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.044758+0000 mon.vm05 (mon.0) 561 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.045225+0000 mon.vm05 (mon.0) 562 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.045396+0000 mon.vm05 (mon.0) 563 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.045463+0000 mon.vm05 (mon.0) 564 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.047067+0000 mon.vm05 (mon.0) 565 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.047356+0000 mon.vm05 (mon.0) 566 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.048767+0000 mon.vm05 (mon.0) 567 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.121078+0000 mon.vm05 (mon.0) 568 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3'
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.272248+0000 mon.vm05 (mon.0) 569 : audit [INF] from='osd.2 ' entity='osd.2'
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.673347+0000 mon.vm05 (mon.0) 570 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.754731+0000 mon.vm05 (mon.0) 571 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.756207+0000 mon.vm05 (mon.0) 572 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.885611+0000 mon.vm05 (mon.0) 573 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:57.886119+0000 mon.vm05 (mon.0) 574 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:58.581 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:58 vm09 bash[21099]: audit 2026-03-10T07:18:58.048068+0000 mon.vm05 (mon.0) 575 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:58.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:54.998289+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:58.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:54.998367+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:55.592360+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:55.592424+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:55.701597+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:55.701630+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:56.644285+0000 mgr.vm05.wnsmpp (mgr.14195) 90 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.002808+0000 mon.vm05 (mon.0) 555 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.002808+0000 mon.vm05 (mon.0) 555 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.003132+0000 mon.vm05 (mon.0) 556 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.003132+0000 mon.vm05 (mon.0) 556 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.003322+0000 mon.vm05 (mon.0) 557 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.003322+0000 mon.vm05 (mon.0) 557 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.028951+0000 mon.vm05 (mon.0) 558 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.028951+0000 mon.vm05 (mon.0) 558 : audit [INF] from='osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:57.039385+0000 mon.vm05 (mon.0) 559 : cluster [INF] osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664] boot 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:57.039385+0000 mon.vm05 (mon.0) 559 : cluster [INF] osd.5 [v2:192.168.123.105:6818/3186003664,v1:192.168.123.105:6819/3186003664] boot 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:57.039412+0000 mon.vm05 (mon.0) 560 : cluster [DBG] osdmap e19: 8 total, 3 up, 8 in 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: cluster 2026-03-10T07:18:57.039412+0000 mon.vm05 (mon.0) 560 : cluster [DBG] osdmap e19: 8 total, 3 up, 8 in 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.044758+0000 mon.vm05 (mon.0) 561 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:58.711 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.044758+0000 mon.vm05 (mon.0) 561 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.045225+0000 mon.vm05 (mon.0) 562 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.045225+0000 mon.vm05 (mon.0) 562 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.045396+0000 mon.vm05 (mon.0) 563 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.045396+0000 mon.vm05 (mon.0) 563 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.045463+0000 mon.vm05 (mon.0) 564 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.045463+0000 mon.vm05 (mon.0) 564 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.047067+0000 mon.vm05 (mon.0) 565 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.047067+0000 mon.vm05 (mon.0) 565 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.047356+0000 mon.vm05 (mon.0) 566 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.047356+0000 mon.vm05 (mon.0) 566 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.048767+0000 mon.vm05 (mon.0) 567 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:58.711 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.048767+0000 mon.vm05 (mon.0) 567 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.121078+0000 mon.vm05 (mon.0) 568 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.121078+0000 mon.vm05 (mon.0) 568 : audit [INF] from='osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065]' entity='osd.3' 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.272248+0000 mon.vm05 (mon.0) 569 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.272248+0000 mon.vm05 (mon.0) 569 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.673347+0000 mon.vm05 (mon.0) 570 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.673347+0000 mon.vm05 (mon.0) 570 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.754731+0000 mon.vm05 (mon.0) 571 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.754731+0000 mon.vm05 (mon.0) 571 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.756207+0000 mon.vm05 (mon.0) 572 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.756207+0000 mon.vm05 (mon.0) 572 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.885611+0000 mon.vm05 (mon.0) 573 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:58.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.885611+0000 mon.vm05 (mon.0) 573 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 
cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:58.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.886119+0000 mon.vm05 (mon.0) 574 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:58.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:57.886119+0000 mon.vm05 (mon.0) 574 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:58.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:58.048068+0000 mon.vm05 (mon.0) 575 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:58.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:58 vm05 bash[17520]: audit 2026-03-10T07:18:58.048068+0000 mon.vm05 (mon.0) 575 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:56.643656+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:56.643656+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:56.643742+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:56.643742+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.341942+0000 mon.vm05 (mon.0) 576 : cluster [INF] osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065] boot 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.341942+0000 mon.vm05 (mon.0) 576 : cluster [INF] osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065] boot 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.343087+0000 mon.vm05 (mon.0) 577 : cluster [INF] osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241] boot 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.343087+0000 mon.vm05 (mon.0) 577 : cluster [INF] osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241] boot 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.343107+0000 mon.vm05 (mon.0) 578 : cluster [INF] osd.2 [v2:192.168.123.109:6808/1704659954,v1:192.168.123.109:6809/1704659954] boot 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.343107+0000 mon.vm05 (mon.0) 578 : cluster [INF] osd.2 
[v2:192.168.123.109:6808/1704659954,v1:192.168.123.109:6809/1704659954] boot 2026-03-10T07:18:59.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.343164+0000 mon.vm05 (mon.0) 579 : cluster [DBG] osdmap e20: 8 total, 6 up, 8 in 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: cluster 2026-03-10T07:18:58.343164+0000 mon.vm05 (mon.0) 579 : cluster [DBG] osdmap e20: 8 total, 6 up, 8 in 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.343588+0000 mon.vm05 (mon.0) 580 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.343588+0000 mon.vm05 (mon.0) 580 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.344190+0000 mon.vm05 (mon.0) 581 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.344190+0000 mon.vm05 (mon.0) 581 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.348351+0000 mon.vm05 (mon.0) 582 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.348351+0000 mon.vm05 (mon.0) 582 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.348658+0000 mon.vm05 (mon.0) 583 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.348658+0000 mon.vm05 (mon.0) 583 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.349047+0000 mon.vm05 (mon.0) 584 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.349047+0000 mon.vm05 (mon.0) 584 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.535668+0000 mon.vm05 (mon.0) 585 : audit [INF] from='osd.6 ' entity='osd.6'
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.557259+0000 mon.vm05 (mon.0) 586 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.673537+0000 mon.vm05 (mon.0) 587 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.850456+0000 mon.vm05 (mon.0) 588 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.854789+0000 mon.vm05 (mon.0) 589 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.871609+0000 mon.vm05 (mon.0) 590 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:58.872692+0000 mon.vm05 (mon.0) 591 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:59.006812+0000 mon.vm05 (mon.0) 592 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:59.012529+0000 mon.vm05 (mon.0) 593 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.675 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:18:59 vm09 bash[21099]: audit 2026-03-10T07:18:59.058184+0000 mon.vm05 (mon.0) 594 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:18:59.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: cluster 2026-03-10T07:18:56.643656+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T07:18:59.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: cluster 2026-03-10T07:18:56.643742+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T07:18:59.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: cluster 2026-03-10T07:18:58.341942+0000 mon.vm05 (mon.0) 576 : cluster [INF] osd.3 [v2:192.168.123.105:6810/519436065,v1:192.168.123.105:6811/519436065] boot
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: cluster 2026-03-10T07:18:58.343087+0000 mon.vm05 (mon.0) 577 : cluster [INF] osd.7 [v2:192.168.123.105:6826/1075500241,v1:192.168.123.105:6827/1075500241] boot
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: cluster 2026-03-10T07:18:58.343107+0000 mon.vm05 (mon.0) 578 : cluster [INF] osd.2 [v2:192.168.123.109:6808/1704659954,v1:192.168.123.109:6809/1704659954] boot
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: cluster 2026-03-10T07:18:58.343164+0000 mon.vm05 (mon.0) 579 : cluster [DBG] osdmap e20: 8 total, 6 up, 8 in
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.343588+0000 mon.vm05 (mon.0) 580 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.344190+0000 mon.vm05 (mon.0) 581 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.348351+0000 mon.vm05 (mon.0) 582 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.348658+0000 mon.vm05 (mon.0) 583 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.349047+0000 mon.vm05 (mon.0) 584 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.535668+0000 mon.vm05 (mon.0) 585 : audit [INF] from='osd.6 ' entity='osd.6'
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.557259+0000 mon.vm05 (mon.0) 586 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.673537+0000 mon.vm05 (mon.0) 587 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.850456+0000 mon.vm05 (mon.0) 588 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.854789+0000 mon.vm05 (mon.0) 589 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.871609+0000 mon.vm05 (mon.0) 590 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:18:58.872692+0000 mon.vm05 (mon.0) 591 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:19:59.006812+0000 mon.vm05 (mon.0) 592 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:19:00.012529+0000 mon.vm05 (mon.0) 593 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:18:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:18:59 vm05 bash[17520]: audit 2026-03-10T07:19:59.058184+0000 mon.vm05 (mon.0) 594 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:19:00.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: cluster 2026-03-10T07:18:58.644517+0000 mgr.vm05.wnsmpp (mgr.14195) 91 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:19:00.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: audit 2026-03-10T07:18:59.369761+0000 mon.vm05 (mon.0) 595 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
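Audit entries 587 and 595 bracket the creation of the built-in .mgr pool: the mgr submits a structured mon command (pg_num pinned to 1, bounded by pg_num_min 1 / pg_num_max 32) and the mon later marks it finished. For reference, the same command can be driven from Python through librados; a sketch, assuming python3-rados is installed and /etc/ceph/ceph.conf is readable:

    import json
    import rados

    # Mirrors the mon command recorded at audit entry 587 above.
    cmd = json.dumps({
        "prefix": "osd pool create",
        "format": "json",
        "pool": ".mgr",
        "pg_num": 1,
        "pg_num_min": 1,
        "pg_num_max": 32,
        "yes_i_really_mean_it": True,
    })

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(cmd, b"")  # (rc, stdout, stderr)
    print(ret, outs)
    cluster.shutdown()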
2026-03-10T07:19:00.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: cluster 2026-03-10T07:18:59.373084+0000 mon.vm05 (mon.0) 596 : cluster [INF] osd.6 [v2:192.168.123.109:6824/2440052421,v1:192.168.123.109:6825/2440052421] boot
2026-03-10T07:19:00.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: cluster 2026-03-10T07:18:59.373108+0000 mon.vm05 (mon.0) 597 : cluster [INF] osd.4 [v2:192.168.123.109:6816/2625184720,v1:192.168.123.109:6817/2625184720] boot
2026-03-10T07:19:00.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: cluster 2026-03-10T07:18:59.373124+0000 mon.vm05 (mon.0) 598 : cluster [DBG] osdmap e21: 8 total, 8 up, 8 in
2026-03-10T07:19:00.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: audit 2026-03-10T07:18:59.373452+0000 mon.vm05 (mon.0) 599 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:19:00.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: audit 2026-03-10T07:18:59.373611+0000 mon.vm05 (mon.0) 600 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:19:00.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:00 vm05 bash[17520]: audit 2026-03-10T07:18:59.376387+0000 mon.vm05 (mon.0) 601 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: cluster 2026-03-10T07:18:58.644517+0000 mgr.vm05.wnsmpp (mgr.14195) 91 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: audit 2026-03-10T07:18:59.369761+0000 mon.vm05 (mon.0) 595 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: cluster 2026-03-10T07:18:59.373084+0000 mon.vm05 (mon.0) 596 : cluster [INF] osd.6 [v2:192.168.123.109:6824/2440052421,v1:192.168.123.109:6825/2440052421] boot
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: cluster 2026-03-10T07:18:59.373108+0000 mon.vm05 (mon.0) 597 : cluster [INF] osd.4 [v2:192.168.123.109:6816/2625184720,v1:192.168.123.109:6817/2625184720] boot
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: cluster 2026-03-10T07:18:59.373124+0000 mon.vm05 (mon.0) 598 : cluster [DBG] osdmap e21: 8 total, 8 up, 8 in
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: audit 2026-03-10T07:18:59.373452+0000 mon.vm05 (mon.0) 599 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: audit 2026-03-10T07:18:59.373611+0000 mon.vm05 (mon.0) 600 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T07:19:00.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:00 vm09 bash[21099]: audit 2026-03-10T07:18:59.376387+0000 mon.vm05 (mon.0) 601 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T07:19:01.459 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.377050+0000 mon.vm05 (mon.0) 602 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: cluster 2026-03-10T07:19:00.380486+0000 mon.vm05 (mon.0) 603 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.958961+0000 mon.vm05 (mon.0) 604 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.976683+0000 mon.vm09 (mon.1) 18 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.978338+0000 mon.vm05 (mon.0) 605 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.978593+0000 mon.vm05 (mon.0) 606 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.978825+0000 mon.vm05 (mon.0) 607 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.980869+0000 mon.vm05 (mon.0) 608 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.980920+0000 mon.vm05 (mon.0) 609 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:19:01.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:01 vm09 bash[21099]: audit 2026-03-10T07:19:00.995594+0000 mon.vm09 (mon.1) 19 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:19:01.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.377050+0000 mon.vm05 (mon.0) 602 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:19:01.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: cluster 2026-03-10T07:19:00.380486+0000 mon.vm05 (mon.0) 603 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in
2026-03-10T07:19:01.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.958961+0000 mon.vm05 (mon.0) 604 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:19:01.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.976683+0000 mon.vm09 (mon.1) 18 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T07:19:01.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.978338+0000 mon.vm05 (mon.0) 605 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:19:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.978593+0000 mon.vm05 (mon.0) 606 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:19:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.978825+0000 mon.vm05 (mon.0) 607 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:19:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.980869+0000 mon.vm05 (mon.0) 608 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T07:19:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.980920+0000 mon.vm05 (mon.0) 609 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch
2026-03-10T07:19:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:01 vm05 bash[17520]: audit 2026-03-10T07:19:00.995594+0000 mon.vm09 (mon.1) 19 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T07:19:01.748 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:19:01.811 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":23,"num_osds":8,"num_up_osds":8,"osd_up_since":1773127139,"num_in_osds":8,"osd_in_since":1773127116,"num_remapped_pgs":0}
2026-03-10T07:19:01.811 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd dump --format=json
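The JSON on stdout is the reply to an `osd stat` query (its dispatch is audited as entry 611 below): osdmap epoch 23 with all 8 OSDs up and in, which is what lets the run proceed to the `osd dump` that follows. A check in the same spirit, reusing the cephadm shell invocation from this run (a sketch; the image, fsid, and cephadm path are specific to this job):

    import json
    import subprocess

    CEPHADM_SHELL = [
        "sudo", "/home/ubuntu/cephtest/cephadm",
        "--image", "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
        "shell", "--fsid", "f0f57d3c-1c50-11f1-837e-f755e850132e", "--",
    ]

    def osd_stat():
        # Same query the harness just ran: ceph osd stat --format=json
        out = subprocess.run(CEPHADM_SHELL + ["ceph", "osd", "stat", "--format=json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    s = osd_stat()
    assert s["num_osds"] == s["num_up_osds"] == s["num_in_osds"], s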
2026-03-10T07:19:02.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:02 vm09 bash[21099]: cluster 2026-03-10T07:19:00.644854+0000 mgr.vm05.wnsmpp (mgr.14195) 92 : cluster [DBG] pgmap v49: 1 pgs: 1 unknown; 0 B data, 1.8 GiB used, 158 GiB / 160 GiB avail
2026-03-10T07:19:02.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:02 vm09 bash[21099]: cluster 2026-03-10T07:19:01.399081+0000 mon.vm05 (mon.0) 610 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in
2026-03-10T07:19:02.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:02 vm09 bash[21099]: audit 2026-03-10T07:19:01.749878+0000 mon.vm05 (mon.0) 611 : audit [DBG] from='client.? 192.168.123.105:0/82960151' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:19:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:02 vm05 bash[17520]: cluster 2026-03-10T07:19:00.644854+0000 mgr.vm05.wnsmpp (mgr.14195) 92 : cluster [DBG] pgmap v49: 1 pgs: 1 unknown; 0 B data, 1.8 GiB used, 158 GiB / 160 GiB avail
2026-03-10T07:19:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:02 vm05 bash[17520]: cluster 2026-03-10T07:19:01.399081+0000 mon.vm05 (mon.0) 610 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in
2026-03-10T07:19:02.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:02 vm05 bash[17520]: audit 2026-03-10T07:19:01.749878+0000 mon.vm05 (mon.0) 611 : audit [DBG] from='client.? 192.168.123.105:0/82960151' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T07:19:04.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:04 vm09 bash[21099]: cluster 2026-03-10T07:19:02.645200+0000 mgr.vm05.wnsmpp (mgr.14195) 93 : cluster [DBG] pgmap v51: 1 pgs: 1 unknown; 0 B data, 1.8 GiB used, 158 GiB / 160 GiB avail
2026-03-10T07:19:04.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:04 vm09 bash[21099]: cluster 2026-03-10T07:19:03.424537+0000 mon.vm05 (mon.0) 612 : cluster [DBG] mgrmap e18: vm05.wnsmpp(active, since 80s), standbys: vm09.rfdvwa
2026-03-10T07:19:04.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:04 vm09 bash[21099]: audit 2026-03-10T07:19:04.132198+0000 mon.vm05 (mon.0) 613 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:04.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:04 vm09 bash[21099]: audit 2026-03-10T07:19:04.137870+0000 mon.vm05 (mon.0) 614 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:04.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:04 vm09 bash[21099]: audit 2026-03-10T07:19:04.237556+0000 mon.vm05 (mon.0) 615 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:04.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:04 vm09 bash[21099]: audit 2026-03-10T07:19:04.243095+0000 mon.vm05 (mon.0) 616 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:04.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:04 vm05 bash[17520]: cluster 2026-03-10T07:19:02.645200+0000 mgr.vm05.wnsmpp (mgr.14195) 93 : cluster [DBG] pgmap v51: 1 pgs: 1 unknown; 0 B data, 1.8 GiB used, 158 GiB / 160 GiB avail
2026-03-10T07:19:04.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:04 vm05 bash[17520]: cluster 2026-03-10T07:19:03.424537+0000 mon.vm05 (mon.0) 612 : cluster [DBG] mgrmap e18: vm05.wnsmpp(active, since 80s), standbys: vm09.rfdvwa
2026-03-10T07:19:04.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:04 vm05 bash[17520]: audit 2026-03-10T07:19:04.132198+0000 mon.vm05 (mon.0) 613 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:04.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:04 vm05 bash[17520]: audit 2026-03-10T07:19:04.137870+0000 mon.vm05 (mon.0) 614 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:04.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:04 vm05 bash[17520]: audit 2026-03-10T07:19:04.237556+0000 mon.vm05 (mon.0) 615 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:04.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:04 vm05 bash[17520]: audit 2026-03-10T07:19:04.243095+0000 mon.vm05 (mon.0) 616 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:19:05.496 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:05.767 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:05 vm09 bash[21099]: cluster 2026-03-10T07:19:04.645503+0000 mgr.vm05.wnsmpp (mgr.14195) 94 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail
2026-03-10T07:19:05.769 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:19:05.769 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":23,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","created":"2026-03-10T07:16:39.907350+0000","modified":"2026-03-10T07:19:01.385461+0000","last_up_change":"2026-03-10T07:18:59.341469+0000","last_in_change":"2026-03-10T07:18:36.269959+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T07:18:58.677599+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1}
,"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6801","nonce":3105919620}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6803","nonce":3105919620}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6807","nonce":3105919620}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6805","nonce":3105919620}]},"public_addr":"192.168.123.109:6801/3105919620","cluster_addr":"192.168.123.109:6803/3105919620","heartbeat_back_addr":"192.168.123.109:6807/3105919620","heartbeat_front_addr":"192.168.123.109:6805/3105919620","state":["exists","up"]},{"osd":1,"uuid":"165a1577-c628-4924-8467-6ee181e4ae8f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6803","nonce":3092910553}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6805","nonce":3092910553}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6809","nonce":3092910553}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6807","nonce":3092910553}]},"public_addr":"192.168.123.105:6803/3092910553","cluster_addr":"192.168.123.105:6805/3092910553","heartbeat_back_addr":"192.168.123.105:6809/3092910553","heartbeat_front_addr":"192.168.123.105:6807/3092910553","state":["exists","up"]},{"osd":2,"uuid":"f64d9f57-1660-4a5e-a3ad-5bb16faca664","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6809","nonce":1704659954}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6811","nonce":1704659954}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6815","nonce":1704659954}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6813","nonce":1704659954}]},"public_addr":"192.168.123.109:6809/1704659954","cluster_addr":"192.168.123.109:6811/1704659954","heartbeat_back_addr":"192.168.123.109:6815/1704659954","heartbeat_front_addr":"192.168.123.109:6813/1704659954","state":["exists","up"]},{"osd":3,"uuid":"22a3ff7c-99
10-4190-bf2f-45d16541f7ef","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6811","nonce":519436065}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6813","nonce":519436065}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6817","nonce":519436065}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6815","nonce":519436065}]},"public_addr":"192.168.123.105:6811/519436065","cluster_addr":"192.168.123.105:6813/519436065","heartbeat_back_addr":"192.168.123.105:6817/519436065","heartbeat_front_addr":"192.168.123.105:6815/519436065","state":["exists","up"]},{"osd":4,"uuid":"a4ecd7d6-8367-42a2-ab73-88c375ccde3b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6817","nonce":2625184720}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6819","nonce":2625184720}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6823","nonce":2625184720}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6821","nonce":2625184720}]},"public_addr":"192.168.123.109:6817/2625184720","cluster_addr":"192.168.123.109:6819/2625184720","heartbeat_back_addr":"192.168.123.109:6823/2625184720","heartbeat_front_addr":"192.168.123.109:6821/2625184720","state":["exists","up"]},{"osd":5,"uuid":"1d064a57-509f-4d38-a4f5-0eded18ac3cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6819","nonce":3186003664}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6821","nonce":3186003664}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6825","nonce":3186003664}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6823","nonce":3186003664}]},"public_addr":"192.168.123.105:6819/3186003664","cluster_addr":"192.168.123.105:6821/3186003664","heartbeat_back_addr":"192.168.123.105:6825/3186003664","heartbeat_front_addr":"192.168.123.105:6823/3186003664","state":["exists","up"]},{"osd":6,"uuid":"0448ea07-efa1-439b-a742-4885c961ceee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6825","nonce":2440052421}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.
109:6826","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6827","nonce":2440052421}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6831","nonce":2440052421}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6829","nonce":2440052421}]},"public_addr":"192.168.123.109:6825/2440052421","cluster_addr":"192.168.123.109:6827/2440052421","heartbeat_back_addr":"192.168.123.109:6831/2440052421","heartbeat_front_addr":"192.168.123.109:6829/2440052421","state":["exists","up"]},{"osd":7,"uuid":"a27f4726-ebcc-445c-905f-5dd7d49f4c2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6827","nonce":1075500241}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6829","nonce":1075500241}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6832","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6833","nonce":1075500241}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6831","nonce":1075500241}]},"public_addr":"192.168.123.105:6827/1075500241","cluster_addr":"192.168.123.105:6829/1075500241","heartbeat_back_addr":"192.168.123.105:6833/1075500241","heartbeat_front_addr":"192.168.123.105:6831/1075500241","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:51.759269+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:51.700684+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:53.133603+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:53.401080+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:54.998369+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:55.701631+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:55.592426+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:56.643744+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/1617152697":"2026-03-11T07:17:42.623714+0000","192.168.123.105:0/2420261393":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/
1521610467":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6801/2655024164":"2026-03-11T07:17:02.380950+0000","192.168.123.105:6800/1416584614":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/406807429":"2026-03-11T07:17:02.380950+0000","192.168.123.105:0/18068012":"2026-03-11T07:16:50.364267+0000","192.168.123.105:6801/1416584614":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/4063750006":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/1230580525":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6800/2655024164":"2026-03-11T07:17:02.380950+0000","192.168.123.105:0/1685718356":"2026-03-11T07:17:02.380950+0000","192.168.123.105:6800/2765515849":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6801/2765515849":"2026-03-11T07:17:42.623714+0000","192.168.123.105:0/2861108076":"2026-03-11T07:17:02.380950+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:19:05.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:05 vm05 bash[17520]: cluster 2026-03-10T07:19:04.645503+0000 mgr.vm05.wnsmpp (mgr.14195) 94 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:19:05.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:05 vm05 bash[17520]: cluster 2026-03-10T07:19:04.645503+0000 mgr.vm05.wnsmpp (mgr.14195) 94 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:19:05.832 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T07:18:58.677599+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '23', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 
'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T07:19:05.832 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd pool get .mgr pg_num 2026-03-10T07:19:06.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:06 vm05 bash[17520]: audit 2026-03-10T07:19:05.770311+0000 mon.vm05 (mon.0) 617 : audit [DBG] from='client.? 192.168.123.105:0/1868958290' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:06.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:06 vm05 bash[17520]: audit 2026-03-10T07:19:05.770311+0000 mon.vm05 (mon.0) 617 : audit [DBG] from='client.? 192.168.123.105:0/1868958290' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:07.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:06 vm09 bash[21099]: audit 2026-03-10T07:19:05.770311+0000 mon.vm05 (mon.0) 617 : audit [DBG] from='client.? 192.168.123.105:0/1868958290' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:07.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:06 vm09 bash[21099]: audit 2026-03-10T07:19:05.770311+0000 mon.vm05 (mon.0) 617 : audit [DBG] from='client.? 192.168.123.105:0/1868958290' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:08.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:07 vm09 bash[21099]: cluster 2026-03-10T07:19:06.645761+0000 mgr.vm05.wnsmpp (mgr.14195) 95 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 612 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:19:08.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:07 vm09 bash[21099]: cluster 2026-03-10T07:19:06.645761+0000 mgr.vm05.wnsmpp (mgr.14195) 95 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 612 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:19:08.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:07 vm05 bash[17520]: cluster 2026-03-10T07:19:06.645761+0000 mgr.vm05.wnsmpp (mgr.14195) 95 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 612 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:19:08.227 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:07 vm05 bash[17520]: cluster 2026-03-10T07:19:06.645761+0000 mgr.vm05.wnsmpp (mgr.14195) 95 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 612 MiB used, 159 GiB / 160 GiB avail 2026-03-10T07:19:09.539 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:09.856 INFO:teuthology.orchestra.run.vm05.stdout:pg_num: 1 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: cluster 2026-03-10T07:19:08.645980+0000 mgr.vm05.wnsmpp (mgr.14195) 96 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: cluster 2026-03-10T07:19:08.645980+0000 mgr.vm05.wnsmpp (mgr.14195) 96 : 
cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: audit 2026-03-10T07:19:09.767687+0000 mon.vm05 (mon.0) 618 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: audit 2026-03-10T07:19:09.767687+0000 mon.vm05 (mon.0) 618 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: audit 2026-03-10T07:19:09.773509+0000 mon.vm05 (mon.0) 619 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: audit 2026-03-10T07:19:09.773509+0000 mon.vm05 (mon.0) 619 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: audit 2026-03-10T07:19:09.774896+0000 mon.vm05 (mon.0) 620 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:09.869 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:09 vm05 bash[17520]: audit 2026-03-10T07:19:09.774896+0000 mon.vm05 (mon.0) 620 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:09.927 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-10T07:19:09.928 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: cluster 2026-03-10T07:19:08.645980+0000 mgr.vm05.wnsmpp (mgr.14195) 96 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: cluster 2026-03-10T07:19:08.645980+0000 mgr.vm05.wnsmpp (mgr.14195) 96 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: audit 2026-03-10T07:19:09.767687+0000 mon.vm05 (mon.0) 618 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: audit 2026-03-10T07:19:09.767687+0000 mon.vm05 (mon.0) 618 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: audit 2026-03-10T07:19:09.773509+0000 mon.vm05 (mon.0) 619 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: audit 2026-03-10T07:19:09.773509+0000 mon.vm05 (mon.0) 619 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: audit 2026-03-10T07:19:09.774896+0000 mon.vm05 (mon.0) 620 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:09 vm09 bash[21099]: audit 2026-03-10T07:19:09.774896+0000 mon.vm05 (mon.0) 620 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: cephadm 2026-03-10T07:19:09.762063+0000 mgr.vm05.wnsmpp (mgr.14195) 97 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: cephadm 2026-03-10T07:19:09.762063+0000 mgr.vm05.wnsmpp (mgr.14195) 97 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:09.857436+0000 mon.vm05 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.105:0/1421648614' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:09.857436+0000 mon.vm05 (mon.0) 621 : audit [DBG] from='client.? 
192.168.123.105:0/1421648614' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: cephadm 2026-03-10T07:19:10.235999+0000 mgr.vm05.wnsmpp (mgr.14195) 98 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: cephadm 2026-03-10T07:19:10.235999+0000 mgr.vm05.wnsmpp (mgr.14195) 98 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.242766+0000 mon.vm05 (mon.0) 622 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.242766+0000 mon.vm05 (mon.0) 622 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.249128+0000 mon.vm05 (mon.0) 623 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.249128+0000 mon.vm05 (mon.0) 623 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.250379+0000 mon.vm05 (mon.0) 624 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.250379+0000 mon.vm05 (mon.0) 624 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.251216+0000 mon.vm05 (mon.0) 625 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.251216+0000 mon.vm05 (mon.0) 625 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.251749+0000 mon.vm05 (mon.0) 626 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.251749+0000 mon.vm05 (mon.0) 626 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 
2026-03-10T07:19:10.256330+0000 mon.vm05 (mon.0) 627 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.256330+0000 mon.vm05 (mon.0) 627 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.258522+0000 mon.vm05 (mon.0) 628 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:19:11.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:10 vm09 bash[21099]: audit 2026-03-10T07:19:10.258522+0000 mon.vm05 (mon.0) 628 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: cephadm 2026-03-10T07:19:09.762063+0000 mgr.vm05.wnsmpp (mgr.14195) 97 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: cephadm 2026-03-10T07:19:09.762063+0000 mgr.vm05.wnsmpp (mgr.14195) 97 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:09.857436+0000 mon.vm05 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.105:0/1421648614' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:09.857436+0000 mon.vm05 (mon.0) 621 : audit [DBG] from='client.? 
192.168.123.105:0/1421648614' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: cephadm 2026-03-10T07:19:10.235999+0000 mgr.vm05.wnsmpp (mgr.14195) 98 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: cephadm 2026-03-10T07:19:10.235999+0000 mgr.vm05.wnsmpp (mgr.14195) 98 : cephadm [INF] Detected new or changed devices on vm09 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.242766+0000 mon.vm05 (mon.0) 622 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.242766+0000 mon.vm05 (mon.0) 622 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.249128+0000 mon.vm05 (mon.0) 623 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.249128+0000 mon.vm05 (mon.0) 623 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.250379+0000 mon.vm05 (mon.0) 624 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.250379+0000 mon.vm05 (mon.0) 624 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.251216+0000 mon.vm05 (mon.0) 625 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.251216+0000 mon.vm05 (mon.0) 625 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.251749+0000 mon.vm05 (mon.0) 626 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.251749+0000 mon.vm05 (mon.0) 626 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 
2026-03-10T07:19:10.256330+0000 mon.vm05 (mon.0) 627 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.256330+0000 mon.vm05 (mon.0) 627 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.258522+0000 mon.vm05 (mon.0) 628 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:19:11.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:10 vm05 bash[17520]: audit 2026-03-10T07:19:10.258522+0000 mon.vm05 (mon.0) 628 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:19:12.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:12 vm09 bash[21099]: cluster 2026-03-10T07:19:10.646249+0000 mgr.vm05.wnsmpp (mgr.14195) 99 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:12.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:12 vm09 bash[21099]: cluster 2026-03-10T07:19:10.646249+0000 mgr.vm05.wnsmpp (mgr.14195) 99 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:12.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:12 vm05 bash[17520]: cluster 2026-03-10T07:19:10.646249+0000 mgr.vm05.wnsmpp (mgr.14195) 99 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:12.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:12 vm05 bash[17520]: cluster 2026-03-10T07:19:10.646249+0000 mgr.vm05.wnsmpp (mgr.14195) 99 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:13.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:13 vm09 bash[21099]: audit 2026-03-10T07:19:12.673834+0000 mon.vm05 (mon.0) 629 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:13.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:13 vm09 bash[21099]: audit 2026-03-10T07:19:12.673834+0000 mon.vm05 (mon.0) 629 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:13.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:13 vm05 bash[17520]: audit 2026-03-10T07:19:12.673834+0000 mon.vm05 (mon.0) 629 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:13.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:13 vm05 bash[17520]: audit 2026-03-10T07:19:12.673834+0000 mon.vm05 (mon.0) 629 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:14.572 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:14.588 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:14 vm05 bash[17520]: cluster 2026-03-10T07:19:12.646541+0000 mgr.vm05.wnsmpp (mgr.14195) 100 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:14.588 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:14 vm05 bash[17520]: cluster 2026-03-10T07:19:12.646541+0000 mgr.vm05.wnsmpp (mgr.14195) 100 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:14.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:14 vm09 bash[21099]: cluster 2026-03-10T07:19:12.646541+0000 mgr.vm05.wnsmpp (mgr.14195) 100 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:14.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:14 vm09 bash[21099]: cluster 2026-03-10T07:19:12.646541+0000 mgr.vm05.wnsmpp (mgr.14195) 100 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:14.873 INFO:teuthology.orchestra.run.vm05.stdout:[client.0] 2026-03-10T07:19:14.873 INFO:teuthology.orchestra.run.vm05.stdout: key = AQDyxa9pb8bHMxAAsua+GdZS6S9Pp9gViwSNoQ== 2026-03-10T07:19:14.934 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T07:19:14.934 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T07:19:14.934 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T07:19:14.949 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T07:19:15.278 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:15 vm09 bash[21099]: audit 2026-03-10T07:19:14.868588+0000 mon.vm05 (mon.0) 630 : audit [INF] from='client.? 192.168.123.105:0/34426252' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:15.586 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:15 vm05 bash[17520]: audit 2026-03-10T07:19:14.868588+0000 mon.vm05 (mon.0) 630 : audit [INF] from='client.? 192.168.123.105:0/34426252' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:15.586 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:15 vm05 bash[17520]: audit 2026-03-10T07:19:14.868588+0000 mon.vm05 (mon.0) 630 : audit [INF] from='client.? 192.168.123.105:0/34426252' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:15.586 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:15 vm05 bash[17520]: audit 2026-03-10T07:19:14.871669+0000 mon.vm05 (mon.0) 631 : audit [INF] from='client.? 
192.168.123.105:0/34426252' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:15.586 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:15 vm05 bash[17520]: audit 2026-03-10T07:19:14.871669+0000 mon.vm05 (mon.0) 631 : audit [INF] from='client.? 192.168.123.105:0/34426252' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:15.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:15 vm09 bash[21099]: audit 2026-03-10T07:19:14.868588+0000 mon.vm05 (mon.0) 630 : audit [INF] from='client.? 192.168.123.105:0/34426252' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:15.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:15 vm09 bash[21099]: audit 2026-03-10T07:19:14.871669+0000 mon.vm05 (mon.0) 631 : audit [INF] from='client.? 192.168.123.105:0/34426252' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:15.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:15 vm09 bash[21099]: audit 2026-03-10T07:19:14.871669+0000 mon.vm05 (mon.0) 631 : audit [INF] from='client.? 192.168.123.105:0/34426252' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:16.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:16 vm09 bash[21099]: cluster 2026-03-10T07:19:14.646813+0000 mgr.vm05.wnsmpp (mgr.14195) 101 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:16.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:16 vm09 bash[21099]: cluster 2026-03-10T07:19:14.646813+0000 mgr.vm05.wnsmpp (mgr.14195) 101 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:16.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:16 vm05 bash[17520]: cluster 2026-03-10T07:19:14.646813+0000 mgr.vm05.wnsmpp (mgr.14195) 101 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:16.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:16 vm05 bash[17520]: cluster 2026-03-10T07:19:14.646813+0000 mgr.vm05.wnsmpp (mgr.14195) 101 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:18.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:18 vm09 bash[21099]: cluster 2026-03-10T07:19:16.647104+0000 mgr.vm05.wnsmpp (mgr.14195) 102 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:18.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:18 vm09 bash[21099]: cluster 2026-03-10T07:19:16.647104+0000 mgr.vm05.wnsmpp (mgr.14195) 102 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:18.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:18 vm05 bash[17520]: cluster 
2026-03-10T07:19:16.647104+0000 mgr.vm05.wnsmpp (mgr.14195) 102 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:18.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:18 vm05 bash[17520]: cluster 2026-03-10T07:19:16.647104+0000 mgr.vm05.wnsmpp (mgr.14195) 102 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:19.588 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm09/config 2026-03-10T07:19:19.885 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-10T07:19:19.885 INFO:teuthology.orchestra.run.vm09.stdout: key = AQD3xa9pQlOPNBAARMkXOt6tXAn4S+NneeUe/w== 2026-03-10T07:19:19.955 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T07:19:19.955 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T07:19:19.955 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T07:19:19.969 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T07:19:19.969 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T07:19:19.969 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph mgr dump --format=json 2026-03-10T07:19:20.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: cluster 2026-03-10T07:19:18.647379+0000 mgr.vm05.wnsmpp (mgr.14195) 103 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:20.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: cluster 2026-03-10T07:19:18.647379+0000 mgr.vm05.wnsmpp (mgr.14195) 103 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:20.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: audit 2026-03-10T07:19:19.877369+0000 mon.vm09 (mon.1) 20 : audit [INF] from='client.? 192.168.123.109:0/2404747354' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: audit 2026-03-10T07:19:19.877369+0000 mon.vm09 (mon.1) 20 : audit [INF] from='client.? 192.168.123.109:0/2404747354' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: audit 2026-03-10T07:19:19.881686+0000 mon.vm05 (mon.0) 632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: audit 2026-03-10T07:19:19.881686+0000 mon.vm05 (mon.0) 632 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: audit 2026-03-10T07:19:19.884344+0000 mon.vm05 (mon.0) 633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:20.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:20 vm05 bash[17520]: audit 2026-03-10T07:19:19.884344+0000 mon.vm05 (mon.0) 633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: cluster 2026-03-10T07:19:18.647379+0000 mgr.vm05.wnsmpp (mgr.14195) 103 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: cluster 2026-03-10T07:19:18.647379+0000 mgr.vm05.wnsmpp (mgr.14195) 103 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: audit 2026-03-10T07:19:19.877369+0000 mon.vm09 (mon.1) 20 : audit [INF] from='client.? 192.168.123.109:0/2404747354' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: audit 2026-03-10T07:19:19.877369+0000 mon.vm09 (mon.1) 20 : audit [INF] from='client.? 192.168.123.109:0/2404747354' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: audit 2026-03-10T07:19:19.881686+0000 mon.vm05 (mon.0) 632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: audit 2026-03-10T07:19:19.881686+0000 mon.vm05 (mon.0) 632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: audit 2026-03-10T07:19:19.884344+0000 mon.vm05 (mon.0) 633 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:20.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:20 vm09 bash[21099]: audit 2026-03-10T07:19:19.884344+0000 mon.vm05 (mon.0) 633 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:19:21.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:21 vm09 bash[21099]: cluster 2026-03-10T07:19:20.647635+0000 mgr.vm05.wnsmpp (mgr.14195) 104 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:21.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:21 vm09 bash[21099]: cluster 2026-03-10T07:19:20.647635+0000 mgr.vm05.wnsmpp (mgr.14195) 104 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:21.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:21 vm05 bash[17520]: cluster 2026-03-10T07:19:20.647635+0000 mgr.vm05.wnsmpp (mgr.14195) 104 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:21.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:21 vm05 bash[17520]: cluster 2026-03-10T07:19:20.647635+0000 mgr.vm05.wnsmpp (mgr.14195) 104 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:23.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:23 vm09 bash[21099]: cluster 2026-03-10T07:19:22.647918+0000 mgr.vm05.wnsmpp (mgr.14195) 105 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:23.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:23 vm09 bash[21099]: cluster 2026-03-10T07:19:22.647918+0000 mgr.vm05.wnsmpp (mgr.14195) 105 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:23.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:23 vm05 bash[17520]: cluster 2026-03-10T07:19:22.647918+0000 mgr.vm05.wnsmpp (mgr.14195) 105 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:23.979 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:23 vm05 bash[17520]: cluster 2026-03-10T07:19:22.647918+0000 mgr.vm05.wnsmpp (mgr.14195) 105 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:24.611 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:25.494 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:19:25.556 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":18,"flags":0,"active_gid":14195,"active_name":"vm05.wnsmpp","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1621625648},{"type":"v1","addr":"192.168.123.105:6801","nonce":1621625648}]},"active_addr":"192.168.123.105:6801/1621625648","active_change":"2026-03-10T07:17:42.623818+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14212,"name":"vm09.rfdvwa","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this option can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.105:8443/","prometheus":"http://192.168.123.105:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":3166560416}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":3101416457}]},{"na
me":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":562122267}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":287318172}]}]} 2026-03-10T07:19:25.558 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T07:19:25.558 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T07:19:25.558 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd dump --format=json 2026-03-10T07:19:26.095 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:26 vm05 bash[17520]: cluster 2026-03-10T07:19:24.648209+0000 mgr.vm05.wnsmpp (mgr.14195) 106 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:26.095 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:26 vm05 bash[17520]: cluster 2026-03-10T07:19:24.648209+0000 mgr.vm05.wnsmpp (mgr.14195) 106 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:26.095 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:26 vm05 bash[17520]: audit 2026-03-10T07:19:25.485947+0000 mon.vm05 (mon.0) 634 : audit [DBG] from='client.? 192.168.123.105:0/1007912491' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:19:26.095 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:26 vm05 bash[17520]: audit 2026-03-10T07:19:25.485947+0000 mon.vm05 (mon.0) 634 : audit [DBG] from='client.? 192.168.123.105:0/1007912491' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:19:26.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:26 vm09 bash[21099]: cluster 2026-03-10T07:19:24.648209+0000 mgr.vm05.wnsmpp (mgr.14195) 106 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:26.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:26 vm09 bash[21099]: cluster 2026-03-10T07:19:24.648209+0000 mgr.vm05.wnsmpp (mgr.14195) 106 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:26.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:26 vm09 bash[21099]: audit 2026-03-10T07:19:25.485947+0000 mon.vm05 (mon.0) 634 : audit [DBG] from='client.? 192.168.123.105:0/1007912491' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:19:26.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:26 vm09 bash[21099]: audit 2026-03-10T07:19:25.485947+0000 mon.vm05 (mon.0) 634 : audit [DBG] from='client.? 
192.168.123.105:0/1007912491' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:19:28.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:28 vm09 bash[21099]: cluster 2026-03-10T07:19:26.648458+0000 mgr.vm05.wnsmpp (mgr.14195) 107 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:28 vm09 bash[21099]: cluster 2026-03-10T07:19:26.648458+0000 mgr.vm05.wnsmpp (mgr.14195) 107 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:28 vm09 bash[21099]: audit 2026-03-10T07:19:27.674647+0000 mon.vm05 (mon.0) 635 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:28.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:28 vm09 bash[21099]: audit 2026-03-10T07:19:27.674647+0000 mon.vm05 (mon.0) 635 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:28 vm05 bash[17520]: cluster 2026-03-10T07:19:26.648458+0000 mgr.vm05.wnsmpp (mgr.14195) 107 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:28 vm05 bash[17520]: cluster 2026-03-10T07:19:26.648458+0000 mgr.vm05.wnsmpp (mgr.14195) 107 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:28 vm05 bash[17520]: audit 2026-03-10T07:19:27.674647+0000 mon.vm05 (mon.0) 635 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:28.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:28 vm05 bash[17520]: audit 2026-03-10T07:19:27.674647+0000 mon.vm05 (mon.0) 635 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:19:30.201 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:30.475 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:19:30.475 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":23,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","created":"2026-03-10T07:16:39.907350+0000","modified":"2026-03-10T07:19:01.385461+0000","last_up_change":"2026-03-10T07:18:59.341469+0000","last_in_change":"2026-03-10T07:18:36.269959+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T07:18:58.677599+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6801","nonce":3105919620}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6803","nonce":3105919620}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6807","nonce":3105919620}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6805","nonce":3105919620}]},"public_addr":"192.168.123.109:6801/3105919620","cluster_addr":"192.168.123.109:6803/3105919620","heartbeat_back_addr":"192.168.123.109:6807/3105919620","heartbeat_front_addr":"192.168.123.109:6805/3105919620","state":["exists","up"]},{"osd":1,"uuid":"165a1577-c628-4924-8467-6ee181e4ae8f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6803","nonce":3092910553}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6805","nonce":3092910553}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6809","nonce":3092910553}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6807","nonce":3092910553}]},"public_addr":"192.168.123.105:6803/3092910553","cluster_addr":"192.168.123.105:6805/3092910553","heartbeat_back_addr":"192.168.123.105:6809/3092910553","heartbeat_front_addr":"192.168.123.105:6807/3092910553","state":["exists","up"]},{"osd":2,"uuid":"f64d9f57-1660-4a5e-a3ad-5bb16faca664","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6809","nonce":1704659954}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6811","nonce":1704659954}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6815","nonce":1704659954}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6813","nonce":1704659954}]},"public_addr":"192.168.123.109:6809/1704659954","cluster_addr":"192.168.123.109:6811/1704659954","heartbeat_back_addr":"192.168.123.109:6815/1704659954","heartbeat_front_addr":"192.168.123.109:6813/1704659954","state":["exists","up"]},{"osd":3,"uuid":"22a3ff7c-9910-4190-bf2f-45d16541f7ef","up":1,"in":1,"weight":1,"primary_affinity"
:1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6811","nonce":519436065}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6813","nonce":519436065}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6817","nonce":519436065}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6815","nonce":519436065}]},"public_addr":"192.168.123.105:6811/519436065","cluster_addr":"192.168.123.105:6813/519436065","heartbeat_back_addr":"192.168.123.105:6817/519436065","heartbeat_front_addr":"192.168.123.105:6815/519436065","state":["exists","up"]},{"osd":4,"uuid":"a4ecd7d6-8367-42a2-ab73-88c375ccde3b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6817","nonce":2625184720}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6819","nonce":2625184720}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6823","nonce":2625184720}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6821","nonce":2625184720}]},"public_addr":"192.168.123.109:6817/2625184720","cluster_addr":"192.168.123.109:6819/2625184720","heartbeat_back_addr":"192.168.123.109:6823/2625184720","heartbeat_front_addr":"192.168.123.109:6821/2625184720","state":["exists","up"]},{"osd":5,"uuid":"1d064a57-509f-4d38-a4f5-0eded18ac3cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6819","nonce":3186003664}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6821","nonce":3186003664}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6825","nonce":3186003664}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6823","nonce":3186003664}]},"public_addr":"192.168.123.105:6819/3186003664","cluster_addr":"192.168.123.105:6821/3186003664","heartbeat_back_addr":"192.168.123.105:6825/3186003664","heartbeat_front_addr":"192.168.123.105:6823/3186003664","state":["exists","up"]},{"osd":6,"uuid":"0448ea07-efa1-439b-a742-4885c961ceee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6825","nonce":2440052421}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:682
7","nonce":2440052421}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6831","nonce":2440052421}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6829","nonce":2440052421}]},"public_addr":"192.168.123.109:6825/2440052421","cluster_addr":"192.168.123.109:6827/2440052421","heartbeat_back_addr":"192.168.123.109:6831/2440052421","heartbeat_front_addr":"192.168.123.109:6829/2440052421","state":["exists","up"]},{"osd":7,"uuid":"a27f4726-ebcc-445c-905f-5dd7d49f4c2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6827","nonce":1075500241}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6829","nonce":1075500241}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6832","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6833","nonce":1075500241}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6831","nonce":1075500241}]},"public_addr":"192.168.123.105:6827/1075500241","cluster_addr":"192.168.123.105:6829/1075500241","heartbeat_back_addr":"192.168.123.105:6833/1075500241","heartbeat_front_addr":"192.168.123.105:6831/1075500241","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:51.759269+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:51.700684+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:53.133603+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:53.401080+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:54.998369+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:55.701631+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:55.592426+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:56.643744+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/1617152697":"2026-03-11T07:17:42.623714+0000","192.168.123.105:0/2420261393":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/1521610467":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6801/26
55024164":"2026-03-11T07:17:02.380950+0000","192.168.123.105:6800/1416584614":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/406807429":"2026-03-11T07:17:02.380950+0000","192.168.123.105:0/18068012":"2026-03-11T07:16:50.364267+0000","192.168.123.105:6801/1416584614":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/4063750006":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/1230580525":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6800/2655024164":"2026-03-11T07:17:02.380950+0000","192.168.123.105:0/1685718356":"2026-03-11T07:17:02.380950+0000","192.168.123.105:6800/2765515849":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6801/2765515849":"2026-03-11T07:17:42.623714+0000","192.168.123.105:0/2861108076":"2026-03-11T07:17:02.380950+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:19:30.491 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:30 vm05 bash[17520]: cluster 2026-03-10T07:19:28.648693+0000 mgr.vm05.wnsmpp (mgr.14195) 108 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:30.491 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:30 vm05 bash[17520]: cluster 2026-03-10T07:19:28.648693+0000 mgr.vm05.wnsmpp (mgr.14195) 108 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:30.587 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T07:19:30.587 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd dump --format=json 2026-03-10T07:19:30.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:30 vm09 bash[21099]: cluster 2026-03-10T07:19:28.648693+0000 mgr.vm05.wnsmpp (mgr.14195) 108 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:30.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:30 vm09 bash[21099]: cluster 2026-03-10T07:19:28.648693+0000 mgr.vm05.wnsmpp (mgr.14195) 108 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:31.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:31 vm09 bash[21099]: audit 2026-03-10T07:19:30.475388+0000 mon.vm05 (mon.0) 636 : audit [DBG] from='client.? 192.168.123.105:0/1848750052' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:31.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:31 vm09 bash[21099]: audit 2026-03-10T07:19:30.475388+0000 mon.vm05 (mon.0) 636 : audit [DBG] from='client.? 192.168.123.105:0/1848750052' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:31.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:31 vm05 bash[17520]: audit 2026-03-10T07:19:30.475388+0000 mon.vm05 (mon.0) 636 : audit [DBG] from='client.? 
192.168.123.105:0/1848750052' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:31.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:31 vm05 bash[17520]: audit 2026-03-10T07:19:30.475388+0000 mon.vm05 (mon.0) 636 : audit [DBG] from='client.? 192.168.123.105:0/1848750052' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:32 vm09 bash[21099]: cluster 2026-03-10T07:19:30.649004+0000 mgr.vm05.wnsmpp (mgr.14195) 109 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:32.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:32 vm09 bash[21099]: cluster 2026-03-10T07:19:30.649004+0000 mgr.vm05.wnsmpp (mgr.14195) 109 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:32.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:32 vm05 bash[17520]: cluster 2026-03-10T07:19:30.649004+0000 mgr.vm05.wnsmpp (mgr.14195) 109 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:32.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:32 vm05 bash[17520]: cluster 2026-03-10T07:19:30.649004+0000 mgr.vm05.wnsmpp (mgr.14195) 109 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:34.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:34 vm09 bash[21099]: cluster 2026-03-10T07:19:32.649276+0000 mgr.vm05.wnsmpp (mgr.14195) 110 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:34.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:34 vm09 bash[21099]: cluster 2026-03-10T07:19:32.649276+0000 mgr.vm05.wnsmpp (mgr.14195) 110 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:34.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:34 vm05 bash[17520]: cluster 2026-03-10T07:19:32.649276+0000 mgr.vm05.wnsmpp (mgr.14195) 110 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:34.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:34 vm05 bash[17520]: cluster 2026-03-10T07:19:32.649276+0000 mgr.vm05.wnsmpp (mgr.14195) 110 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:35.245 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:35.499 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:19:35.499 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":23,"fsid":"f0f57d3c-1c50-11f1-837e-f755e850132e","created":"2026-03-10T07:16:39.907350+0000","modified":"2026-03-10T07:19:01.385461+0000","last_up_change":"2026-03-10T07:18:59.341469+0000","last_in_change":"2026-03-10T07:18:36.269959+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T07:18:58.677599+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"3e7fdc0d-cbc2-4007-9509-71bc5e3d1f39","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6801","nonce":3105919620}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6803","nonce":3105919620}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6807","nonce":3105919620}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3105919620},{"type":"v1","addr":"192.168.123.109:6805","nonce":3105919620}]},"public_addr":"192.168.123.109:6801/3105919620","cluster_addr":"192.168.123.109:6803/3105919620","heartbeat_back_addr":"192.168.123.109:6807/3105919620","heartbeat_front_addr":"192.168.123.109:6805/3105919620","state":["exists","up"]},{"osd":1,"uuid":"165a1577-c628-4924-8467-6ee181e4ae8f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6803","nonce":3092910553}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6805","nonce":3092910553}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6809","nonce":3092910553}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":3092910553},{"type":"v1","addr":"192.168.123.105:6807","nonce":3092910553}]},"public_addr":"192.168.123.105:6803/3092910553","cluster_addr":"192.168.123.105:6805/3092910553","heartbeat_back_addr":"192.168.123.105:6809/3092910553","heartbeat_front_addr":"192.168.123.105:6807/3092910553","state":["exists","up"]},{"osd":2,"uuid":"f64d9f57-1660-4a5e-a3ad-5bb16faca664","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6809","nonce":1704659954}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6811","nonce":1704659954}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6815","nonce":1704659954}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1704659954},{"type":"v1","addr":"192.168.123.109:6813","nonce":1704659954}]},"public_addr":"192.168.123.109:6809/1704659954","cluster_addr":"192.168.123.109:6811/1704659954","heartbeat_back_addr":"192.168.123.109:6815/1704659954","heartbeat_front_addr":"192.168.123.109:6813/1704659954","state":["exists","up"]},{"osd":3,"uuid":"22a3ff7c-9910-4190-bf2f-45d16541f7ef","up":1,"in":1,"weight":1,"primary_affinity"
:1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6811","nonce":519436065}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6813","nonce":519436065}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6817","nonce":519436065}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":519436065},{"type":"v1","addr":"192.168.123.105:6815","nonce":519436065}]},"public_addr":"192.168.123.105:6811/519436065","cluster_addr":"192.168.123.105:6813/519436065","heartbeat_back_addr":"192.168.123.105:6817/519436065","heartbeat_front_addr":"192.168.123.105:6815/519436065","state":["exists","up"]},{"osd":4,"uuid":"a4ecd7d6-8367-42a2-ab73-88c375ccde3b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6817","nonce":2625184720}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6819","nonce":2625184720}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6823","nonce":2625184720}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2625184720},{"type":"v1","addr":"192.168.123.109:6821","nonce":2625184720}]},"public_addr":"192.168.123.109:6817/2625184720","cluster_addr":"192.168.123.109:6819/2625184720","heartbeat_back_addr":"192.168.123.109:6823/2625184720","heartbeat_front_addr":"192.168.123.109:6821/2625184720","state":["exists","up"]},{"osd":5,"uuid":"1d064a57-509f-4d38-a4f5-0eded18ac3cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6819","nonce":3186003664}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6821","nonce":3186003664}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6825","nonce":3186003664}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":3186003664},{"type":"v1","addr":"192.168.123.105:6823","nonce":3186003664}]},"public_addr":"192.168.123.105:6819/3186003664","cluster_addr":"192.168.123.105:6821/3186003664","heartbeat_back_addr":"192.168.123.105:6825/3186003664","heartbeat_front_addr":"192.168.123.105:6823/3186003664","state":["exists","up"]},{"osd":6,"uuid":"0448ea07-efa1-439b-a742-4885c961ceee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6825","nonce":2440052421}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:682
7","nonce":2440052421}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6831","nonce":2440052421}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":2440052421},{"type":"v1","addr":"192.168.123.109:6829","nonce":2440052421}]},"public_addr":"192.168.123.109:6825/2440052421","cluster_addr":"192.168.123.109:6827/2440052421","heartbeat_back_addr":"192.168.123.109:6831/2440052421","heartbeat_front_addr":"192.168.123.109:6829/2440052421","state":["exists","up"]},{"osd":7,"uuid":"a27f4726-ebcc-445c-905f-5dd7d49f4c2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6827","nonce":1075500241}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6829","nonce":1075500241}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6832","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6833","nonce":1075500241}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":1075500241},{"type":"v1","addr":"192.168.123.105:6831","nonce":1075500241}]},"public_addr":"192.168.123.105:6827/1075500241","cluster_addr":"192.168.123.105:6829/1075500241","heartbeat_back_addr":"192.168.123.105:6833/1075500241","heartbeat_front_addr":"192.168.123.105:6831/1075500241","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:51.759269+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:51.700684+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:53.133603+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:53.401080+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:54.998369+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:55.701631+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:55.592426+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:18:56.643744+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/1617152697":"2026-03-11T07:17:42.623714+0000","192.168.123.105:0/2420261393":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/1521610467":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6801/26
55024164":"2026-03-11T07:17:02.380950+0000","192.168.123.105:6800/1416584614":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/406807429":"2026-03-11T07:17:02.380950+0000","192.168.123.105:0/18068012":"2026-03-11T07:16:50.364267+0000","192.168.123.105:6801/1416584614":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/4063750006":"2026-03-11T07:16:50.364267+0000","192.168.123.105:0/1230580525":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6800/2655024164":"2026-03-11T07:17:02.380950+0000","192.168.123.105:0/1685718356":"2026-03-11T07:17:02.380950+0000","192.168.123.105:6800/2765515849":"2026-03-11T07:17:42.623714+0000","192.168.123.105:6801/2765515849":"2026-03-11T07:17:42.623714+0000","192.168.123.105:0/2861108076":"2026-03-11T07:17:02.380950+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:19:35.558 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.0 flush_pg_stats 2026-03-10T07:19:35.558 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.1 flush_pg_stats 2026-03-10T07:19:35.558 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.2 flush_pg_stats 2026-03-10T07:19:35.558 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.3 flush_pg_stats 2026-03-10T07:19:35.559 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.4 flush_pg_stats 2026-03-10T07:19:35.559 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.5 flush_pg_stats 2026-03-10T07:19:35.559 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.6 flush_pg_stats 2026-03-10T07:19:35.559 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph tell osd.7 flush_pg_stats 2026-03-10T07:19:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:36 vm09 bash[21099]: cluster 2026-03-10T07:19:34.649568+0000 mgr.vm05.wnsmpp (mgr.14195) 111 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:36.674 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:36 vm09 bash[21099]: cluster 2026-03-10T07:19:34.649568+0000 mgr.vm05.wnsmpp (mgr.14195) 111 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:36 vm09 bash[21099]: audit 2026-03-10T07:19:35.499880+0000 mon.vm05 (mon.0) 637 : audit [DBG] from='client.? 192.168.123.105:0/2323259113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:36 vm09 bash[21099]: audit 2026-03-10T07:19:35.499880+0000 mon.vm05 (mon.0) 637 : audit [DBG] from='client.? 192.168.123.105:0/2323259113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:36.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:36 vm05 bash[17520]: cluster 2026-03-10T07:19:34.649568+0000 mgr.vm05.wnsmpp (mgr.14195) 111 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:36.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:36 vm05 bash[17520]: cluster 2026-03-10T07:19:34.649568+0000 mgr.vm05.wnsmpp (mgr.14195) 111 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:36.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:36 vm05 bash[17520]: audit 2026-03-10T07:19:35.499880+0000 mon.vm05 (mon.0) 637 : audit [DBG] from='client.? 192.168.123.105:0/2323259113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:36.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:36 vm05 bash[17520]: audit 2026-03-10T07:19:35.499880+0000 mon.vm05 (mon.0) 637 : audit [DBG] from='client.? 
192.168.123.105:0/2323259113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:19:38.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:38 vm09 bash[21099]: cluster 2026-03-10T07:19:36.649816+0000 mgr.vm05.wnsmpp (mgr.14195) 112 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:38.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:38 vm09 bash[21099]: cluster 2026-03-10T07:19:36.649816+0000 mgr.vm05.wnsmpp (mgr.14195) 112 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:38.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:38 vm05 bash[17520]: cluster 2026-03-10T07:19:36.649816+0000 mgr.vm05.wnsmpp (mgr.14195) 112 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:38.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:38 vm05 bash[17520]: cluster 2026-03-10T07:19:36.649816+0000 mgr.vm05.wnsmpp (mgr.14195) 112 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:39.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:39 vm09 bash[21099]: cluster 2026-03-10T07:19:38.650114+0000 mgr.vm05.wnsmpp (mgr.14195) 113 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:39.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:39 vm09 bash[21099]: cluster 2026-03-10T07:19:38.650114+0000 mgr.vm05.wnsmpp (mgr.14195) 113 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:39.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:39 vm05 bash[17520]: cluster 2026-03-10T07:19:38.650114+0000 mgr.vm05.wnsmpp (mgr.14195) 113 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:39.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:39 vm05 bash[17520]: cluster 2026-03-10T07:19:38.650114+0000 mgr.vm05.wnsmpp (mgr.14195) 113 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:40.464 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:40.465 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:40.467 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:40.469 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:40.473 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:40.473 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:40.473 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:40.474 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:19:41.348 
INFO:teuthology.orchestra.run.vm05.stdout:90194313226 2026-03-10T07:19:41.349 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.4 2026-03-10T07:19:41.349 INFO:teuthology.orchestra.run.vm05.stdout:85899345930 2026-03-10T07:19:41.349 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.7 2026-03-10T07:19:41.393 INFO:teuthology.orchestra.run.vm05.stdout:81604378634 2026-03-10T07:19:41.393 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.5 2026-03-10T07:19:41.403 INFO:teuthology.orchestra.run.vm05.stdout:85899345930 2026-03-10T07:19:41.403 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.3 2026-03-10T07:19:41.415 INFO:teuthology.orchestra.run.vm05.stdout:77309411338 2026-03-10T07:19:41.416 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.1 2026-03-10T07:19:41.433 INFO:teuthology.orchestra.run.vm05.stdout:85899345930 2026-03-10T07:19:41.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.2 2026-03-10T07:19:41.438 INFO:teuthology.orchestra.run.vm05.stdout:90194313226 2026-03-10T07:19:41.438 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.6 2026-03-10T07:19:41.447 INFO:teuthology.orchestra.run.vm05.stdout:77309411338 2026-03-10T07:19:41.447 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph osd last-stat-seq osd.0 2026-03-10T07:19:41.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:41 vm05 bash[17520]: cluster 2026-03-10T07:19:40.650398+0000 mgr.vm05.wnsmpp (mgr.14195) 114 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:41.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:41 vm05 bash[17520]: cluster 2026-03-10T07:19:40.650398+0000 mgr.vm05.wnsmpp (mgr.14195) 114 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:42.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:41 vm09 bash[21099]: cluster 2026-03-10T07:19:40.650398+0000 mgr.vm05.wnsmpp (mgr.14195) 114 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:42.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:41 vm09 bash[21099]: 
cluster 2026-03-10T07:19:40.650398+0000 mgr.vm05.wnsmpp (mgr.14195) 114 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:43.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:42 vm09 bash[21099]: audit 2026-03-10T07:19:42.674291+0000 mon.vm05 (mon.0) 638 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:19:43.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:42 vm05 bash[17520]: audit 2026-03-10T07:19:42.674291+0000 mon.vm05 (mon.0) 638 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:19:43.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:43 vm05 bash[17520]: cluster 2026-03-10T07:19:42.651616+0000 mgr.vm05.wnsmpp (mgr.14195) 115 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:44.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:43 vm09 bash[21099]: cluster 2026-03-10T07:19:42.651616+0000 mgr.vm05.wnsmpp (mgr.14195) 115 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:45.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:45 vm05 bash[17520]: cluster 2026-03-10T07:19:44.651907+0000 mgr.vm05.wnsmpp (mgr.14195) 116 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:46.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:45 vm09 bash[21099]: cluster 2026-03-10T07:19:44.651907+0000 mgr.vm05.wnsmpp (mgr.14195) 116 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:46.364 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:46.365 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:46.367 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:46.367 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:46.368 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:46.369 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:46.370 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:46.380 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:47.236 INFO:teuthology.orchestra.run.vm05.stdout:77309411339
2026-03-10T07:19:47.284 INFO:teuthology.orchestra.run.vm05.stdout:85899345931
2026-03-10T07:19:47.328 INFO:teuthology.orchestra.run.vm05.stdout:77309411339
2026-03-10T07:19:47.406 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411338 got 77309411339 for osd.0
2026-03-10T07:19:47.407 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.435 INFO:teuthology.orchestra.run.vm05.stdout:85899345931
2026-03-10T07:19:47.441 INFO:teuthology.orchestra.run.vm05.stdout:81604378635
2026-03-10T07:19:47.493 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345930 got 85899345931 for osd.2
2026-03-10T07:19:47.493 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.495 INFO:teuthology.orchestra.run.vm05.stdout:85899345931
2026-03-10T07:19:47.545 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411338 got 77309411339 for osd.1
2026-03-10T07:19:47.545 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.550 INFO:teuthology.orchestra.run.vm05.stdout:90194313227
2026-03-10T07:19:47.552 INFO:teuthology.orchestra.run.vm05.stdout:90194313227
2026-03-10T07:19:47.614 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345930 got 85899345931 for osd.7
2026-03-10T07:19:47.614 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.657 INFO:tasks.cephadm.ceph_manager.ceph:need seq 81604378634 got 81604378635 for osd.5
2026-03-10T07:19:47.657 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.696 INFO:tasks.cephadm.ceph_manager.ceph:need seq 90194313226 got 90194313227 for osd.6
2026-03-10T07:19:47.696 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.700 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345930 got 85899345931 for osd.3
2026-03-10T07:19:47.700 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.705 INFO:tasks.cephadm.ceph_manager.ceph:need seq 90194313226 got 90194313227 for osd.4
2026-03-10T07:19:47.705 DEBUG:teuthology.parallel:result is None
2026-03-10T07:19:47.705 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean
2026-03-10T07:19:47.705 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph pg dump --format=json
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: cluster 2026-03-10T07:19:46.652220+0000 mgr.vm05.wnsmpp (mgr.14195) 117 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.231477+0000 mon.vm05 (mon.0) 639 : audit [DBG] from='client.? 192.168.123.105:0/2245728301' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.284205+0000 mon.vm05 (mon.0) 640 : audit [DBG] from='client.? 192.168.123.105:0/896444208' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.323686+0000 mon.vm09 (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/100898778' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.430866+0000 mon.vm05 (mon.0) 641 : audit [DBG] from='client.? 192.168.123.105:0/306347763' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.431400+0000 mon.vm05 (mon.0) 642 : audit [DBG] from='client.? 192.168.123.105:0/623939263' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
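The "need seq X got Y for osd.N" lines above are the harness flushing PG stats before trusting a pg dump: for each OSD it holds a target stat sequence number, shells out for `ceph osd last-stat-seq <id>` (the bare integers on stdout above, requested via the audit'd "osd last-stat-seq" commands), and proceeds once the reported value reaches the target. A minimal sketch of that polling check, assuming a hypothetical run_ceph helper that wraps a command in `cephadm shell` the way the DEBUG lines above do (the helper name, simplified argv, and loop are illustrative, not teuthology's actual code):

    import subprocess
    import time

    FSID = "f0f57d3c-1c50-11f1-837e-f755e850132e"  # fsid from the log above

    def run_ceph(*args):
        # Hypothetical helper: run a ceph command inside `cephadm shell`,
        # mirroring the `sudo .../cephadm shell --fsid ... -- ceph ...` lines
        # (the real invocation also passes --image and a full cephadm path).
        cmd = ["sudo", "cephadm", "shell", "--fsid", FSID, "--"] + list(args)
        return subprocess.check_output(cmd, text=True)

    def wait_for_stat_seq(osd_id, want_seq, interval=1):
        # Poll `ceph osd last-stat-seq <id>` until the OSD has published
        # stats at least as new as want_seq ("need seq X got Y for osd.N").
        while True:
            got = int(run_ceph("ceph", "osd", "last-stat-seq", str(osd_id)).strip())
            if got >= want_seq:
                return got
            time.sleep(interval)

In the run above every OSD was already one past the wanted seq (e.g. need 77309411338, got 77309411339), so each parallel check returned immediately ("result is None").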
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.494418+0000 mon.vm05 (mon.0) 643 : audit [DBG] from='client.? 192.168.123.105:0/2977090456' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.544612+0000 mon.vm09 (mon.1) 22 : audit [DBG] from='client.? 192.168.123.105:0/3142292186' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-10T07:19:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:47 vm05 bash[17520]: audit 2026-03-10T07:19:47.550003+0000 mon.vm09 (mon.1) 23 : audit [DBG] from='client.? 192.168.123.105:0/3084350827' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: cluster 2026-03-10T07:19:46.652220+0000 mgr.vm05.wnsmpp (mgr.14195) 117 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.231477+0000 mon.vm05 (mon.0) 639 : audit [DBG] from='client.? 192.168.123.105:0/2245728301' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.284205+0000 mon.vm05 (mon.0) 640 : audit [DBG] from='client.? 192.168.123.105:0/896444208' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.323686+0000 mon.vm09 (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/100898778' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.430866+0000 mon.vm05 (mon.0) 641 : audit [DBG] from='client.? 192.168.123.105:0/306347763' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.431400+0000 mon.vm05 (mon.0) 642 : audit [DBG] from='client.? 192.168.123.105:0/623939263' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.494418+0000 mon.vm05 (mon.0) 643 : audit [DBG] from='client.? 192.168.123.105:0/2977090456' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.544612+0000 mon.vm09 (mon.1) 22 : audit [DBG] from='client.? 192.168.123.105:0/3142292186' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-10T07:19:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:47 vm09 bash[21099]: audit 2026-03-10T07:19:47.550003+0000 mon.vm09 (mon.1) 23 : audit [DBG] from='client.? 192.168.123.105:0/3084350827' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-10T07:19:50.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:49 vm09 bash[21099]: cluster 2026-03-10T07:19:48.652464+0000 mgr.vm05.wnsmpp (mgr.14195) 118 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:50.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:49 vm05 bash[17520]: cluster 2026-03-10T07:19:48.652464+0000 mgr.vm05.wnsmpp (mgr.14195) 118 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:52.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:51 vm09 bash[21099]: cluster 2026-03-10T07:19:50.652738+0000 mgr.vm05.wnsmpp (mgr.14195) 119 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:52.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:51 vm05 bash[17520]: cluster 2026-03-10T07:19:50.652738+0000 mgr.vm05.wnsmpp (mgr.14195) 119 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:52.403 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:52.669 INFO:teuthology.orchestra.run.vm05.stderr:dumped all
2026-03-10T07:19:52.669 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:19:52.729
INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":76,"stamp":"2026-03-10T07:19:52.652896+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218136,"kb_used_data":3020,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521256,"statfs":{"total":171765137408,"available":171541766144,"internally_reserved":0,"allocated":3092480,"data_stored":1981640,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002624"},"pg_stats":[{"pgid":"1.0","version":"22'32","reported_seq":57,"reported_epoch":23,"state":"active+clean","last_fresh":"2026-03-10T07:19:01.398191+0000","last_change":"2026-03-10T07:19:00.714154+0000","last_active":"2026-0
3-10T07:19:01.398191+0000","last_peered":"2026-03-10T07:19:01.398191+0000","last_clean":"2026-03-10T07:19:01.398191+0000","last_became_active":"2026-03-10T07:19:00.713140+0000","last_became_peered":"2026-03-10T07:19:00.713140+0000","last_unstale":"2026-03-10T07:19:01.398191+0000","last_undegraded":"2026-03-10T07:19:01.398191+0000","last_fullsized":"2026-03-10T07:19:01.398191+0000","mapping_epoch":21,"log_start":"0'0","ondisk_log_start":"0'0","created":21,"last_epoch_clean":22,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:18:59.341469+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:18:59.341469+0000","last_clean_scrub_stamp":"2026-03-10T07:18:59.341469+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:35:20.901022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,3],"acting":[7,0,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":6,"up_from":21,"seq":90194313228,"num_pgs":0,"num_osds":1,"num_per_po
ol_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":21,"seq":90194313228,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27052,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940372,"statfs":{"total":21470642176,"available":21442940928,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":7,"up_from":20,"seq":85899345932,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27632,"kb_used_data":660,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939792,"statfs":{"total":21470642176,"available":21442347008,"internally_reserved":0,"allocated":675840,"data_stored":534755,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":20,"seq":85899345932,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27624,"kb_used_data":660,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939800,"statfs":{"total":21470642176,"available":21442355200,"internally_reserved":0,"allocated":675840,"data_stored":534755,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":20,"seq":85899345932,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"app
ly_latency_ns":0},"alerts":[]},{"osd":5,"up_from":19,"seq":81604378636,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":18,"seq":77309411341,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27052,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940372,"statfs":{"total":21470642176,"available":21442940928,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":18,"seq":77309411341,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27632,"kb_used_data":660,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939792,"statfs":{"total":21470642176,"available":21442347008,"internally_reserved":0,"allocated":675840,"data_stored":534755,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T07:19:52.729 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph pg dump --format=json 2026-03-10T07:19:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:53 vm09 bash[21099]: cluster 2026-03-10T07:19:52.653037+0000 mgr.vm05.wnsmpp (mgr.14195) 120 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:19:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:53 vm09 
bash[21099]: cluster 2026-03-10T07:19:52.653037+0000 mgr.vm05.wnsmpp (mgr.14195) 120 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:54.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:53 vm09 bash[21099]: audit 2026-03-10T07:19:52.670229+0000 mgr.vm05.wnsmpp (mgr.14195) 121 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:19:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:53 vm05 bash[17520]: cluster 2026-03-10T07:19:52.653037+0000 mgr.vm05.wnsmpp (mgr.14195) 120 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:53 vm05 bash[17520]: audit 2026-03-10T07:19:52.670229+0000 mgr.vm05.wnsmpp (mgr.14195) 121 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:19:56.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:55 vm09 bash[21099]: cluster 2026-03-10T07:19:54.653316+0000 mgr.vm05.wnsmpp (mgr.14195) 122 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:56.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:55 vm05 bash[17520]: cluster 2026-03-10T07:19:54.653316+0000 mgr.vm05.wnsmpp (mgr.14195) 122 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:56.442 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:19:56.716 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:19:56.716 INFO:teuthology.orchestra.run.vm05.stderr:dumped all
2026-03-10T07:19:56.780
INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":78,"stamp":"2026-03-10T07:19:56.653465+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218136,"kb_used_data":3020,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521256,"statfs":{"total":171765137408,"available":171541766144,"internally_reserved":0,"allocated":3092480,"data_stored":1981640,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001699"},"pg_stats":[{"pgid":"1.0","version":"22'32","reported_seq":57,"reported_epoch":23,"state":"active+clean","last_fresh":"2026-03-10T07:19:01.398191+0000","last_change":"2026-03-10T07:19:00.714154+0000","last_active":"2026-0
3-10T07:19:01.398191+0000","last_peered":"2026-03-10T07:19:01.398191+0000","last_clean":"2026-03-10T07:19:01.398191+0000","last_became_active":"2026-03-10T07:19:00.713140+0000","last_became_peered":"2026-03-10T07:19:00.713140+0000","last_unstale":"2026-03-10T07:19:01.398191+0000","last_undegraded":"2026-03-10T07:19:01.398191+0000","last_fullsized":"2026-03-10T07:19:01.398191+0000","mapping_epoch":21,"log_start":"0'0","ondisk_log_start":"0'0","created":21,"last_epoch_clean":22,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T07:18:59.341469+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T07:18:59.341469+0000","last_clean_scrub_stamp":"2026-03-10T07:18:59.341469+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T07:35:20.901022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,3],"acting":[7,0,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":6,"up_from":21,"seq":90194313229,"num_pgs":0,"num_osds":1,"num_per_po
ol_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":21,"seq":90194313229,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27052,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940372,"statfs":{"total":21470642176,"available":21442940928,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":7,"up_from":20,"seq":85899345933,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27632,"kb_used_data":660,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939792,"statfs":{"total":21470642176,"available":21442347008,"internally_reserved":0,"allocated":675840,"data_stored":534755,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":20,"seq":85899345933,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27624,"kb_used_data":660,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939800,"statfs":{"total":21470642176,"available":21442355200,"internally_reserved":0,"allocated":675840,"data_stored":534755,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":20,"seq":85899345933,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"app
ly_latency_ns":0},"alerts":[]},{"osd":5,"up_from":19,"seq":81604378637,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":18,"seq":77309411341,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27052,"kb_used_data":208,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940372,"statfs":{"total":21470642176,"available":21442940928,"internally_reserved":0,"allocated":212992,"data_stored":75475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":18,"seq":77309411341,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27632,"kb_used_data":660,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939792,"statfs":{"total":21470642176,"available":21442347008,"internally_reserved":0,"allocated":675840,"data_stored":534755,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T07:19:56.780 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T07:19:56.780 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T07:19:56.780 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy
2026-03-10T07:19:56.780 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph health --format=json
2026-03-10T07:19:58.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:57 vm09 bash[21099]: cluster 2026-03-10T07:19:56.653579+0000 mgr.vm05.wnsmpp (mgr.14195) 123 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:58.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:57 vm09 bash[21099]: audit 2026-03-10T07:19:56.717686+0000 mgr.vm05.wnsmpp (mgr.14195) 124 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:19:58.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:57 vm09 bash[21099]: audit 2026-03-10T07:19:57.674326+0000 mon.vm05 (mon.0) 644 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:19:58.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:57 vm05 bash[17520]: cluster 2026-03-10T07:19:56.653579+0000 mgr.vm05.wnsmpp (mgr.14195) 123 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:19:58.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:57 vm05 bash[17520]: audit 2026-03-10T07:19:56.717686+0000 mgr.vm05.wnsmpp (mgr.14195) 124 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:19:58.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:57 vm05 bash[17520]: audit 2026-03-10T07:19:57.674326+0000 mon.vm05 (mon.0) 644 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:20:00.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:19:59 vm09 bash[21099]: cluster 2026-03-10T07:19:58.653882+0000 mgr.vm05.wnsmpp (mgr.14195) 125 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:00.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:19:59 vm05 bash[17520]: cluster 2026-03-10T07:19:58.653882+0000 mgr.vm05.wnsmpp (mgr.14195) 125 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:00.476 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:00.763 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:20:00.764 INFO:teuthology.orchestra.run.vm05.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]}
2026-03-10T07:20:00.820 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done
2026-03-10T07:20:00.820 INFO:tasks.cephadm:Setup complete, yielding
2026-03-10T07:20:00.820 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T07:20:00.822 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local
2026-03-10T07:20:00.823 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch status'
2026-03-10T07:20:01.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:00 vm09 bash[21099]: cluster 2026-03-10T07:20:00.000097+0000 mon.vm05 (mon.0) 645 : cluster [INF] overall HEALTH_OK
2026-03-10T07:20:01.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:00 vm09 bash[21099]: audit 2026-03-10T07:20:00.764754+0000 mon.vm05 (mon.0) 646 : audit [DBG] from='client.? 192.168.123.105:0/2218702349' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T07:20:01.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:00 vm05 bash[17520]: cluster 2026-03-10T07:20:00.000097+0000 mon.vm05 (mon.0) 645 : cluster [INF] overall HEALTH_OK
2026-03-10T07:20:01.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:00 vm05 bash[17520]: audit 2026-03-10T07:20:00.764754+0000 mon.vm05 (mon.0) 646 : audit [DBG] from='client.? 192.168.123.105:0/2218702349' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T07:20:02.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:01 vm09 bash[21099]: cluster 2026-03-10T07:20:00.654162+0000 mgr.vm05.wnsmpp (mgr.14195) 126 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:02.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:01 vm05 bash[17520]: cluster 2026-03-10T07:20:00.654162+0000 mgr.vm05.wnsmpp (mgr.14195) 126 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:04.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:03 vm09 bash[21099]: cluster 2026-03-10T07:20:02.654381+0000 mgr.vm05.wnsmpp (mgr.14195) 127 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:04.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:03 vm05 bash[17520]: cluster 2026-03-10T07:20:02.654381+0000 mgr.vm05.wnsmpp (mgr.14195) 127 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:04.508 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:04.769 INFO:teuthology.orchestra.run.vm05.stdout:Backend: cephadm
2026-03-10T07:20:04.769 INFO:teuthology.orchestra.run.vm05.stdout:Available: Yes
2026-03-10T07:20:04.769 INFO:teuthology.orchestra.run.vm05.stdout:Paused: No
2026-03-10T07:20:04.823 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch ps'
2026-03-10T07:20:06.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:05 vm09 bash[21099]: cluster 2026-03-10T07:20:04.654672+0000 mgr.vm05.wnsmpp (mgr.14195) 128 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:06.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:05 vm09 bash[21099]: audit 2026-03-10T07:20:04.770176+0000 mgr.vm05.wnsmpp (mgr.14195) 129 : audit [DBG] from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:06.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:05 vm05 bash[17520]: cluster 2026-03-10T07:20:04.654672+0000 mgr.vm05.wnsmpp (mgr.14195) 128 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:06.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:05 vm05 bash[17520]: audit 2026-03-10T07:20:04.770176+0000 mgr.vm05.wnsmpp (mgr.14195) 129 : audit [DBG] from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:08.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:07 vm09 bash[21099]: cluster 2026-03-10T07:20:06.654944+0000 mgr.vm05.wnsmpp (mgr.14195) 130 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:08.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:07 vm05 bash[17520]: cluster 2026-03-10T07:20:06.654944+0000 mgr.vm05.wnsmpp (mgr.14195) 130 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
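Each command in the cephadm.shell task (ceph orch status, ceph orch ps, ceph orch ls, ...) is executed by wrapping it in `cephadm shell -c <conf> -k <keyring> --fsid <fsid> -- bash -c '<cmd>'`, exactly as the DEBUG:teuthology.orchestra.run lines show. A sketch of building that invocation, with the image, fsid, and paths taken from the log; the wrapper function itself is illustrative, not teuthology's API:

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "f0f57d3c-1c50-11f1-837e-f755e850132e"

    def cephadm_shell(command):
        # Build the same argv as the DEBUG lines above: run `command` inside a
        # cephadm shell container with the admin conf and keyring available.
        argv = [
            "sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE,
            "shell", "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID, "--", "bash", "-c", command,
        ]
        return subprocess.check_output(argv, text=True)

For example, cephadm_shell("ceph orch ps") yields the daemon table shown below, one row per managed daemon.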
2026-03-10T07:20:08.539 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (2m) 64s ago 2m 17.5M - 0.25.0 c8568f914cd2 83564f78fb3d
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 *:9926 running (2m) 64s ago 2m 8439k - 19.2.3-678-ge911bdeb 654f31e6858e b1c7ad206111
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 *:9926 running (2m) 64s ago 2m 6395k - 19.2.3-678-ge911bdeb 654f31e6858e 6d763e025bef
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (2m) 64s ago 2m 7296k - 19.2.3-678-ge911bdeb 654f31e6858e eee6421fab37
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (2m) 64s ago 2m 7308k - 19.2.3-678-ge911bdeb 654f31e6858e 93d45dc69cc5
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (2m) 64s ago 2m 63.8M - 10.4.0 c8b91775d855 1d4334f91f97
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.wnsmpp vm05 *:9283,8765,8443 running (3m) 64s ago 3m 523M - 19.2.3-678-ge911bdeb 654f31e6858e 7e456e14e1b3
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.rfdvwa vm09 *:8443,9283,8765 running (2m) 64s ago 2m 464M - 19.2.3-678-ge911bdeb 654f31e6858e 77bbabd48a81
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (3m) 64s ago 3m 44.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 9a36265d35f0
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (2m) 64s ago 2m 38.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e a99639a157b8
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (2m) 64s ago 2m 7512k - 1.7.0 72c9c2088986 4f78d5630475
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (2m) 64s ago 2m 7312k - 1.7.0 72c9c2088986 a137075cccbf
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm09 running (80s) 64s ago 82s 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 265c2a142782
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (79s) 64s ago 81s 50.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c17e07c89163
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm09 running (78s) 64s ago 80s 47.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e ef7f7be900e3
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (78s) 64s ago 80s 52.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 89e8deae7ef3
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (77s) 64s ago 79s 48.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 75a9e4910012
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm05 running (76s) 64s ago 78s 27.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e febb1912b095
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm09 running (76s) 64s ago 77s 49.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e a323290ae613
2026-03-10T07:20:08.796 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm05 running (75s) 64s ago 77s 36.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 527bf6f7a638
2026-03-10T07:20:08.797 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (2m) 64s ago 2m 31.3M - 2.51.0 1d3b7f56885b e918b82837c9
2026-03-10T07:20:08.854 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch ls'
2026-03-10T07:20:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:09 vm09 bash[21099]: cluster 2026-03-10T07:20:08.655152+0000 mgr.vm05.wnsmpp (mgr.14195) 131 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:10.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:09 vm09 bash[21099]: audit 2026-03-10T07:20:08.793031+0000 mgr.vm05.wnsmpp (mgr.14195) 132 : audit [DBG] from='client.14450 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:11.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:10 vm09 bash[21099]: audit 2026-03-10T07:20:10.308730+0000 mon.vm05 (mon.0) 647 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
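
The ceph orch ps table above is meant for eyes; scripted assertions in these suites grep the columns (as the later haproxy stop/start loop does) or ask for structured output instead. A hedged sketch of the machine-readable form, using the same cephadm shell wrapper as above:

    sudo /home/ubuntu/cephtest/cephadm \
        --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid f0f57d3c-1c50-11f1-837e-f755e850132e \
        -- bash -c 'ceph orch ps --format json-pretty'
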
2026-03-10T07:20:12.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:11 vm09 bash[21099]: cluster 2026-03-10T07:20:10.655453+0000 mgr.vm05.wnsmpp (mgr.14195) 133 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:12.586 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager ?:9093,9094 1/1 68s ago 3m count:1
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter ?:9926 2/2 68s ago 3m *
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:crash 2/2 68s ago 3m *
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:grafana ?:3000 1/1 68s ago 3m count:1
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:mgr 2/2 68s ago 3m count:2
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:mon 2/2 68s ago 2m vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09;count:2
2026-03-10T07:20:12.863 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter ?:9100 2/2 68s ago 3m *
2026-03-10T07:20:12.864 INFO:teuthology.orchestra.run.vm05.stdout:osd.all-available-devices 8 68s ago 2m *
2026-03-10T07:20:12.864 INFO:teuthology.orchestra.run.vm05.stdout:prometheus ?:9095 1/1 68s ago 3m count:1
2026-03-10T07:20:12.899 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:12 vm05 bash[17520]: audit 2026-03-10T07:20:12.674652+0000 mon.vm05 (mon.0) 648 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:20:12.929 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch host ls'
2026-03-10T07:20:14.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:13 vm09 bash[21099]: cluster 2026-03-10T07:20:12.655714+0000 mgr.vm05.wnsmpp (mgr.14195) 134 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:14.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:13 vm09 bash[21099]: audit 2026-03-10T07:20:12.862660+0000 mgr.vm05.wnsmpp (mgr.14195) 135 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:16.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: cluster 2026-03-10T07:20:14.655974+0000 mgr.vm05.wnsmpp (mgr.14195) 136 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
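
ceph orch ls condenses to one line per service spec rather than per daemon; osd.all-available-devices showing 8 running daemons is the roleless suite's entire OSD deployment. To see the specs the orchestrator is actually holding, the export form is useful (a sketch under the same cephadm shell wrapper, not a command from this run):

    ceph orch ls --export --format yaml
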
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.186701+0000 mon.vm05 (mon.0) 649 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.193934+0000 mon.vm05 (mon.0) 650 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.368153+0000 mon.vm05 (mon.0) 651 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.372908+0000 mon.vm05 (mon.0) 652 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.672581+0000 mon.vm05 (mon.0) 653 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.673198+0000 mon.vm05 (mon.0) 654 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.684460+0000 mon.vm05 (mon.0) 655 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:16.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:16 vm05 bash[17520]: audit 2026-03-10T07:20:15.686740+0000 mon.vm05 (mon.0) 656 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:20:16.622 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:16.888 INFO:teuthology.orchestra.run.vm05.stdout:HOST ADDR LABELS STATUS
2026-03-10T07:20:16.888 INFO:teuthology.orchestra.run.vm05.stdout:vm05 192.168.123.105
2026-03-10T07:20:16.888 INFO:teuthology.orchestra.run.vm05.stdout:vm09 192.168.123.109
2026-03-10T07:20:16.888 INFO:teuthology.orchestra.run.vm05.stdout:2 hosts in cluster
2026-03-10T07:20:16.944 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch device ls'
2026-03-10T07:20:18.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:18 vm05 bash[17520]: cluster 2026-03-10T07:20:16.656213+0000 mgr.vm05.wnsmpp (mgr.14195) 137 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
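
Every device listed next is rejected ("Has a FileSystem", "LVM detected") because the OSDs were already built from them, so osd.all-available-devices has nothing new to consume; that is the steady state the suite's final check greps for. If a device were meant to be reclaimed for a fresh OSD, the orchestrator's zap call would be the tool (a hedged sketch, not executed in this run, and destructive to the device's data):

    ceph orch device zap vm05 /dev/vdb --force
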
2026-03-10T07:20:18.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:18 vm05 bash[17520]: audit 2026-03-10T07:20:16.889439+0000 mgr.vm05.wnsmpp (mgr.14195) 138 : audit [DBG] from='client.14458 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:20.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:20 vm05 bash[17520]: cluster 2026-03-10T07:20:18.656452+0000 mgr.vm05.wnsmpp (mgr.14195) 139 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:20.654 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 71s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdb hdd DWNBRSTVMM05001 20.0G No 71s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdc hdd DWNBRSTVMM05002 20.0G No 71s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdd hdd DWNBRSTVMM05003 20.0G No 71s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vde hdd DWNBRSTVMM05004 20.0G No 71s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 70s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdb hdd DWNBRSTVMM09001 20.0G No 70s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdc hdd DWNBRSTVMM09002 20.0G No 70s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdd hdd DWNBRSTVMM09003 20.0G No 70s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.915 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vde hdd DWNBRSTVMM09004 20.0G No 70s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:20.970 INFO:teuthology.run_tasks:Running task vip...
2026-03-10T07:20:20.973 INFO:tasks.vip:Allocating static IPs for each host...
2026-03-10T07:20:20.973 INFO:tasks.vip:peername 192.168.123.105
2026-03-10T07:20:20.974 INFO:tasks.vip:192.168.123.105 in 192.168.123.0/24, pos 104
2026-03-10T07:20:20.974 INFO:tasks.vip:vm05.local static 12.12.0.105, vnet 12.12.0.0/22
2026-03-10T07:20:20.974 INFO:tasks.vip:VIPs are [IPv4Address('12.12.1.105')]
2026-03-10T07:20:20.974 DEBUG:teuthology.orchestra.run.vm05:> sudo ip route ls
2026-03-10T07:20:20.982 INFO:teuthology.orchestra.run.vm05.stdout:default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.105 metric 100
2026-03-10T07:20:20.982 INFO:teuthology.orchestra.run.vm05.stdout:172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-10T07:20:20.982 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.105 metric 100
2026-03-10T07:20:20.982 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.105 metric 100
2026-03-10T07:20:20.983 INFO:tasks.vip:Configuring 12.12.0.105 on vm05.local iface ens3...
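
The vip task derives everything from the host's position in its subnet: 192.168.123.105 sits at offset 104 within 192.168.123.0/24, the host keeps its final octet for the static address 12.12.0.105 inside the private 12.12.0.0/22 vnet, and the first virtual IP comes from the adjacent 12.12.1.x block. The addresses are plumbed with plain iproute2 calls, which is also how to clean one up by hand if an aborted run leaves it behind (the delete is a sketch, not part of this log):

    sudo ip addr add 12.12.0.105/22 dev ens3    # what the task runs next
    sudo ip addr del 12.12.0.105/22 dev ens3    # manual cleanup, if ever needed
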
2026-03-10T07:20:20.983 DEBUG:teuthology.orchestra.run.vm05:> sudo ip addr add 12.12.0.105/22 dev ens3
2026-03-10T07:20:21.032 INFO:tasks.vip:peername 192.168.123.109
2026-03-10T07:20:21.033 INFO:tasks.vip:192.168.123.109 in 192.168.123.0/24, pos 108
2026-03-10T07:20:21.033 INFO:tasks.vip:vm09.local static 12.12.0.109, vnet 12.12.0.0/22
2026-03-10T07:20:21.033 DEBUG:teuthology.orchestra.run.vm09:> sudo ip route ls
2026-03-10T07:20:21.039 INFO:teuthology.orchestra.run.vm09.stdout:default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.109 metric 100
2026-03-10T07:20:21.039 INFO:teuthology.orchestra.run.vm09.stdout:172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-10T07:20:21.039 INFO:teuthology.orchestra.run.vm09.stdout:192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.109 metric 100
2026-03-10T07:20:21.039 INFO:teuthology.orchestra.run.vm09.stdout:192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.109 metric 100
2026-03-10T07:20:21.040 INFO:tasks.vip:Configuring 12.12.0.109 on vm09.local iface ens3...
2026-03-10T07:20:21.040 DEBUG:teuthology.orchestra.run.vm09:> sudo ip addr add 12.12.0.109/22 dev ens3
2026-03-10T07:20:21.088 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T07:20:21.090 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local
2026-03-10T07:20:21.090 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch device ls --refresh'
2026-03-10T07:20:22.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:22 vm05 bash[17520]: cluster 2026-03-10T07:20:20.656695+0000 mgr.vm05.wnsmpp (mgr.14195) 140 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:22.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:22 vm05 bash[17520]: audit 2026-03-10T07:20:20.915023+0000 mgr.vm05.wnsmpp (mgr.14195) 141 : audit [DBG] from='client.14462 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:20:24.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:24 vm09 bash[21099]: cluster 2026-03-10T07:20:22.656954+0000 mgr.vm05.wnsmpp (mgr.14195) 142 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:25.747 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 76s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdb hdd DWNBRSTVMM05001 20.0G No 76s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdc hdd DWNBRSTVMM05002 20.0G No 76s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdd hdd DWNBRSTVMM05003 20.0G No 76s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vde hdd DWNBRSTVMM05004 20.0G No 76s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 75s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdb hdd DWNBRSTVMM09001 20.0G No 75s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdc hdd DWNBRSTVMM09002 20.0G No 75s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdd hdd DWNBRSTVMM09003 20.0G No 75s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.006 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vde hdd DWNBRSTVMM09004 20.0G No 75s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:20:26.098 INFO:teuthology.run_tasks:Running task vip.exec...
2026-03-10T07:20:26.100 INFO:tasks.vip:Running commands on role host.a host ubuntu@vm05.local
2026-03-10T07:20:26.101 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'systemctl stop nfs-server'
2026-03-10T07:20:26.108 INFO:teuthology.orchestra.run.vm05.stderr:+ systemctl stop nfs-server
2026-03-10T07:20:26.111 INFO:tasks.vip:Running commands on role host.b host ubuntu@vm09.local
2026-03-10T07:20:26.111 DEBUG:teuthology.orchestra.run.vm09:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'systemctl stop nfs-server'
2026-03-10T07:20:26.117 INFO:teuthology.orchestra.run.vm09.stderr:+ systemctl stop nfs-server
2026-03-10T07:20:26.120 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T07:20:26.122 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local
2026-03-10T07:20:26.122 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph fs volume create foofs'
2026-03-10T07:20:26.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:26 vm09 bash[21099]: cluster 2026-03-10T07:20:24.657204+0000 mgr.vm05.wnsmpp (mgr.14195) 143 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:26.423 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:26 vm09 bash[21099]: audit 2026-03-10T07:20:26.007703+0000 mon.vm05 (mon.0) 657 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
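
Stopping the distro's kernel nfs-server on every host is a precondition, not cleanup: the ganesha NFS daemons and the haproxy front end deployed next must be able to bind ports 12049 and 2049 themselves. On a host where the unit is active, the usual systemd check confirms it (a sketch of what the task effects):

    sudo systemctl stop nfs-server
    systemctl is-active nfs-server    # expect "inactive"
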
192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:27.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:27 vm09 bash[21099]: audit 2026-03-10T07:20:26.006101+0000 mgr.vm05.wnsmpp (mgr.14195) 144 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:27.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:27 vm09 bash[21099]: audit 2026-03-10T07:20:26.006101+0000 mgr.vm05.wnsmpp (mgr.14195) 144 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:27.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:27 vm05 bash[17520]: audit 2026-03-10T07:20:26.006101+0000 mgr.vm05.wnsmpp (mgr.14195) 144 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:27.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:27 vm05 bash[17520]: audit 2026-03-10T07:20:26.006101+0000 mgr.vm05.wnsmpp (mgr.14195) 144 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:28.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:28 vm09 bash[21099]: cluster 2026-03-10T07:20:26.657464+0000 mgr.vm05.wnsmpp (mgr.14195) 145 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:28.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:28 vm09 bash[21099]: cluster 2026-03-10T07:20:26.657464+0000 mgr.vm05.wnsmpp (mgr.14195) 145 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:28.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:28 vm09 bash[21099]: audit 2026-03-10T07:20:27.674749+0000 mon.vm05 (mon.0) 658 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:20:28.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:28 vm09 bash[21099]: audit 2026-03-10T07:20:27.674749+0000 mon.vm05 (mon.0) 658 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:20:28.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:28 vm05 bash[17520]: cluster 2026-03-10T07:20:26.657464+0000 mgr.vm05.wnsmpp (mgr.14195) 145 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:28.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:28 vm05 bash[17520]: cluster 2026-03-10T07:20:26.657464+0000 mgr.vm05.wnsmpp (mgr.14195) 145 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:28.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:28 vm05 bash[17520]: audit 2026-03-10T07:20:27.674749+0000 mon.vm05 (mon.0) 658 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:20:28.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:28 vm05 bash[17520]: audit 2026-03-10T07:20:27.674749+0000 mon.vm05 
(mon.0) 658 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:20:30.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:30 vm09 bash[21099]: cluster 2026-03-10T07:20:28.657700+0000 mgr.vm05.wnsmpp (mgr.14195) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:30.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:30 vm09 bash[21099]: cluster 2026-03-10T07:20:28.657700+0000 mgr.vm05.wnsmpp (mgr.14195) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:30.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:30 vm05 bash[17520]: cluster 2026-03-10T07:20:28.657700+0000 mgr.vm05.wnsmpp (mgr.14195) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:30.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:30 vm05 bash[17520]: cluster 2026-03-10T07:20:28.657700+0000 mgr.vm05.wnsmpp (mgr.14195) 146 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:30.849 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: cluster 2026-03-10T07:20:30.657956+0000 mgr.vm05.wnsmpp (mgr.14195) 147 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: cluster 2026-03-10T07:20:30.657956+0000 mgr.vm05.wnsmpp (mgr.14195) 147 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.102552+0000 mon.vm05 (mon.0) 659 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.102552+0000 mon.vm05 (mon.0) 659 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.107392+0000 mon.vm05 (mon.0) 660 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.107392+0000 mon.vm05 (mon.0) 660 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.177652+0000 mgr.vm05.wnsmpp (mgr.14195) 148 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "foofs", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.177652+0000 mgr.vm05.wnsmpp (mgr.14195) 148 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "foofs", "target": ["mon-mgr", 
""]}]: dispatch 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.179255+0000 mon.vm05 (mon.0) 661 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]: dispatch 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.179255+0000 mon.vm05 (mon.0) 661 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]: dispatch 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.371957+0000 mon.vm05 (mon.0) 662 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.371957+0000 mon.vm05 (mon.0) 662 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.377179+0000 mon.vm05 (mon.0) 663 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.377179+0000 mon.vm05 (mon.0) 663 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.537791+0000 mon.vm05 (mon.0) 664 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.537791+0000 mon.vm05 (mon.0) 664 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.543996+0000 mon.vm05 (mon.0) 665 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.543996+0000 mon.vm05 (mon.0) 665 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.806971+0000 mon.vm05 (mon.0) 666 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.806971+0000 mon.vm05 (mon.0) 666 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.811411+0000 mon.vm05 (mon.0) 667 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:32 vm09 bash[21099]: audit 2026-03-10T07:20:31.811411+0000 mon.vm05 (mon.0) 667 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' 
entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: cluster 2026-03-10T07:20:30.657956+0000 mgr.vm05.wnsmpp (mgr.14195) 147 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: cluster 2026-03-10T07:20:30.657956+0000 mgr.vm05.wnsmpp (mgr.14195) 147 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.102552+0000 mon.vm05 (mon.0) 659 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.102552+0000 mon.vm05 (mon.0) 659 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.107392+0000 mon.vm05 (mon.0) 660 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.107392+0000 mon.vm05 (mon.0) 660 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.177652+0000 mgr.vm05.wnsmpp (mgr.14195) 148 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "foofs", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.177652+0000 mgr.vm05.wnsmpp (mgr.14195) 148 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "foofs", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.179255+0000 mon.vm05 (mon.0) 661 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]: dispatch 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.179255+0000 mon.vm05 (mon.0) 661 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]: dispatch 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.371957+0000 mon.vm05 (mon.0) 662 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.371957+0000 mon.vm05 (mon.0) 662 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:32.463 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:32 vm05 bash[17520]: audit 2026-03-10T07:20:31.377179+0000 mon.vm05 (mon.0) 663 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 
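
ceph fs volume create foofs is a one-call filesystem bootstrap: the audit trail above shows the mgr creating the cephfs.foofs.meta and cephfs.foofs.data pools, and it follows with fs new and an MDS service spec through the orchestrator. For comparison, the equivalent manual sequence would be (a sketch only; the volume command is what this run actually uses, and it also handles pool application tags and MDS placement):

    ceph osd pool create cephfs.foofs.meta
    ceph osd pool create cephfs.foofs.data
    ceph fs new foofs cephfs.foofs.meta cephfs.foofs.data
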
2026-03-10T07:20:33.226 INFO:teuthology.run_tasks:Running task cephadm.apply...
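
cephadm.apply feeds the multi-document YAML below to ceph orch apply -i - on stdin, so the NFS service and its ingress land in one submission. Outside teuthology the same deployment reads naturally from a file (a sketch; nfs-ingress.yaml is a hypothetical file holding this run's spec):

    ceph orch apply -i nfs-ingress.yaml
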
2026-03-10T07:20:33.230 INFO:tasks.cephadm:Applying spec(s):
placement:
  count: 2
service_id: foo
service_type: nfs
spec:
  port: 12049
---
service_id: nfs.foo
service_type: ingress
spec:
  backend_service: nfs.foo
  frontend_port: 2049
  monitor_port: 9002
  virtual_ip: 12.12.1.105/22
2026-03-10T07:20:33.230 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch apply -i -
2026-03-10T07:20:33.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:33 vm09 bash[21099]: audit 2026-03-10T07:20:32.111864+0000 mon.vm05 (mon.0) 668 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]': finished
2026-03-10T07:20:33.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:33 vm09 bash[21099]: cluster 2026-03-10T07:20:32.114678+0000 mon.vm05 (mon.0) 669 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in
2026-03-10T07:20:33.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:33 vm09 bash[21099]: audit 2026-03-10T07:20:32.116413+0000 mon.vm05 (mon.0) 670 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]: dispatch
2026-03-10T07:20:33.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:33 vm05 bash[17520]: audit 2026-03-10T07:20:32.111864+0000 mon.vm05 (mon.0) 668 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]': finished
2026-03-10T07:20:33.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:33 vm05 bash[17520]: cluster 2026-03-10T07:20:32.114678+0000 mon.vm05 (mon.0) 669 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in
2026-03-10T07:20:33.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:33 vm05 bash[17520]: audit 2026-03-10T07:20:32.116413+0000 mon.vm05 (mon.0) 670 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]: dispatch
2026-03-10T07:20:33.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:33 vm05 bash[17520]: debug 2026-03-10T07:20:33.136+0000 7fdc09047640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cluster 2026-03-10T07:20:32.658214+0000 mgr.vm05.wnsmpp (mgr.14195) 149 : cluster [DBG] pgmap v97: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cluster 2026-03-10T07:20:33.111974+0000 mon.vm05 (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: audit 2026-03-10T07:20:33.120453+0000 mon.vm05 (mon.0) 672 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]': finished
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cluster 2026-03-10T07:20:33.132611+0000 mon.vm05 (mon.0) 673 : cluster [DBG] osdmap e25: 8 total, 8 up, 8 in
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: audit 2026-03-10T07:20:33.141243+0000 mon.vm05 (mon.0) 674 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]: dispatch
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cluster 2026-03-10T07:20:33.141540+0000 mon.vm05 (mon.0) 675 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cluster 2026-03-10T07:20:33.141547+0000 mon.vm05 (mon.0) 676 : cluster [WRN] Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: audit 2026-03-10T07:20:33.157665+0000 mon.vm05 (mon.0) 677 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]': finished
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cluster 2026-03-10T07:20:33.159699+0000 mon.vm05 (mon.0) 678 : cluster [DBG] osdmap e26: 8 total, 8 up, 8 in
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cluster 2026-03-10T07:20:33.159926+0000 mon.vm05 (mon.0) 679 : cluster [DBG] fsmap foofs:0
2026-03-10T07:20:34.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: cephadm 2026-03-10T07:20:33.160983+0000 mgr.vm05.wnsmpp (mgr.14195) 150 : cephadm [INF] Saving service mds.foofs spec with placement count:2
2026-03-10T07:20:34.425 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:34 vm09 bash[21099]: audit 2026-03-10T07:20:33.167414+0000 mon.vm05 (mon.0) 680 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:34.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cluster 2026-03-10T07:20:32.658214+0000 mgr.vm05.wnsmpp (mgr.14195) 149 : cluster [DBG] pgmap v97: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cluster 2026-03-10T07:20:33.111974+0000 mon.vm05 (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: audit 2026-03-10T07:20:33.120453+0000 mon.vm05 (mon.0) 672 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]': finished
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cluster 2026-03-10T07:20:33.132611+0000 mon.vm05 (mon.0) 673 : cluster [DBG] osdmap e25: 8 total, 8 up, 8 in
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: audit 2026-03-10T07:20:33.141243+0000 mon.vm05 (mon.0) 674 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]: dispatch
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cluster 2026-03-10T07:20:33.141540+0000 mon.vm05 (mon.0) 675 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cluster 2026-03-10T07:20:33.141547+0000 mon.vm05 (mon.0) 676 : cluster [WRN] Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: audit 2026-03-10T07:20:33.157665+0000 mon.vm05 (mon.0) 677 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]': finished
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cluster 2026-03-10T07:20:33.159699+0000 mon.vm05 (mon.0) 678 : cluster [DBG] osdmap e26: 8 total, 8 up, 8 in
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cluster 2026-03-10T07:20:33.159926+0000 mon.vm05 (mon.0) 679 : cluster [DBG] fsmap foofs:0
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: cephadm 2026-03-10T07:20:33.160983+0000 mgr.vm05.wnsmpp (mgr.14195) 150 : cephadm [INF] Saving service mds.foofs spec with placement count:2
2026-03-10T07:20:34.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:34 vm05 bash[17520]: audit 2026-03-10T07:20:33.167414+0000 mon.vm05 (mon.0) 680 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:35.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:35 vm09 bash[21099]: cluster 2026-03-10T07:20:34.168946+0000 mon.vm05 (mon.0) 681 : cluster [DBG] osdmap e27: 8 total, 8 up, 8 in
2026-03-10T07:20:35.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:35 vm05 bash[17520]: cluster 2026-03-10T07:20:34.168946+0000 mon.vm05 (mon.0) 681 : cluster [DBG] osdmap e27: 8 total, 8 up, 8 in
2026-03-10T07:20:36.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:36 vm05 bash[17520]: cluster 2026-03-10T07:20:34.658515+0000 mgr.vm05.wnsmpp (mgr.14195) 151 : cluster [DBG] pgmap v101: 65 pgs: 17 active+clean, 16 creating+peering, 32 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:36.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:36 vm05 bash[17520]: cluster 2026-03-10T07:20:35.160486+0000 mon.vm05 (mon.0) 682 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T07:20:36.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:36 vm05 bash[17520]: cluster 2026-03-10T07:20:35.171528+0000 mon.vm05 (mon.0) 683 : cluster [DBG] osdmap e28: 8 total, 8 up, 8 in
2026-03-10T07:20:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:36 vm09 bash[21099]: cluster 2026-03-10T07:20:34.658515+0000 mgr.vm05.wnsmpp (mgr.14195) 151 : cluster [DBG] pgmap v101: 65 pgs: 17 active+clean, 16 creating+peering, 32 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:36 vm09 bash[21099]: cluster 2026-03-10T07:20:35.160486+0000 mon.vm05 (mon.0) 682 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T07:20:36.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:36 vm09 bash[21099]: cluster 2026-03-10T07:20:35.171528+0000 mon.vm05 (mon.0) 683 : cluster [DBG] osdmap e28: 8 total, 8 up, 8 in
2026-03-10T07:20:37.911 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:38.186 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled nfs.foo update...
2026-03-10T07:20:38.186 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled ingress.nfs.foo update...
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: cluster 2026-03-10T07:20:36.658779+0000 mgr.vm05.wnsmpp (mgr.14195) 152 : cluster [DBG] pgmap v103: 65 pgs: 33 active+clean, 16 creating+peering, 16 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.049023+0000 mon.vm05 (mon.0) 684 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.053400+0000 mon.vm05 (mon.0) 685 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.436836+0000 mon.vm05 (mon.0) 686 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.441540+0000 mon.vm05 (mon.0) 687 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.442319+0000 mon.vm05 (mon.0) 688 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.442797+0000 mon.vm05 (mon.0) 689 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.446133+0000 mon.vm05 (mon.0) 690 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.447399+0000 mon.vm05 (mon.0) 691 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.450645+0000 mon.vm05 (mon.0) 692 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm09.kuyylf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.452733+0000 mon.vm05 (mon.0) 693 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm09.kuyylf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T07:20:38.197 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 bash[17520]: audit 2026-03-10T07:20:37.454278+0000 mon.vm05 (mon.0) 694 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:38.211 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:37 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:38.211 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: cluster 2026-03-10T07:20:36.658779+0000 mgr.vm05.wnsmpp (mgr.14195) 152 : cluster [DBG] pgmap v103: 65 pgs: 33 active+clean, 16 creating+peering, 16 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.049023+0000 mon.vm05 (mon.0) 684 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.053400+0000 mon.vm05 (mon.0) 685 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.436836+0000 mon.vm05 (mon.0) 686 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.441540+0000 mon.vm05 (mon.0) 687 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.442319+0000 mon.vm05 (mon.0) 688 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.442797+0000 mon.vm05 (mon.0) 689 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.446133+0000 mon.vm05 (mon.0) 690 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.447399+0000 mon.vm05 (mon.0) 691 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.450645+0000 mon.vm05 (mon.0) 692 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm09.kuyylf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.452733+0000 mon.vm05 (mon.0) 693 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm09.kuyylf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T07:20:38.212 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 bash[21099]: audit 2026-03-10T07:20:37.454278+0000 mon.vm05 (mon.0) 694 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:38.242 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service...
2026-03-10T07:20:38.244 INFO:tasks.cephadm:Waiting for ceph service nfs.foo to start (timeout 300)...
2026-03-10T07:20:38.244 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:20:38.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:38 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:39.112 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:38 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:39.112 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
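The cephadm.wait_for_service task starting above polls "ceph orch ls -f json" (the DEBUG line shows the exact cephadm shell invocation) until nfs.foo reports all of its daemons running. A minimal sketch of such a polling loop, assuming the orch ls JSON carries a "service_name" field and a "status" object with "running" and "size" counts, and calling the host's ceph CLI directly instead of going through cephadm shell:

    # Hedged sketch of the wait-for-service idea; the JSON field names are assumptions.
    import json, subprocess, time

    def wait_for_service(name, timeout=300, interval=1):
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(["ceph", "orch", "ls", "-f", "json"],
                                 check=True, capture_output=True, text=True).stdout
            for svc in json.loads(out):
                if svc.get("service_name") == name:
                    status = svc.get("status", {})
                    # Done once every scheduled daemon is actually running.
                    if status.get("size", 0) > 0 and status.get("running") == status.get("size"):
                        return
            time.sleep(interval)
        raise TimeoutError(f"service {name} did not come up within {timeout}s")

    wait_for_service("nfs.foo")
    wait_for_service("ingress.nfs.foo")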
2026-03-10T07:20:39.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 2026-03-10T07:20:37.454755+0000 mgr.vm05.wnsmpp (mgr.14195) 153 : cephadm [INF] Deploying daemon mds.foofs.vm09.kuyylf on vm09 2026-03-10T07:20:39.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 2026-03-10T07:20:37.454755+0000 mgr.vm05.wnsmpp (mgr.14195) 153 : cephadm [INF] Deploying daemon mds.foofs.vm09.kuyylf on vm09 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.177421+0000 mgr.vm05.wnsmpp (mgr.14195) 154 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.177421+0000 mgr.vm05.wnsmpp (mgr.14195) 154 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 2026-03-10T07:20:38.179273+0000 mgr.vm05.wnsmpp (mgr.14195) 155 : cephadm [INF] Saving service nfs.foo spec with placement count:2 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 2026-03-10T07:20:38.179273+0000 mgr.vm05.wnsmpp (mgr.14195) 155 : cephadm [INF] Saving service nfs.foo spec with placement count:2 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.183345+0000 mon.vm05 (mon.0) 695 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.183345+0000 mon.vm05 (mon.0) 695 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 2026-03-10T07:20:38.184035+0000 mgr.vm05.wnsmpp (mgr.14195) 156 : cephadm [INF] Saving service ingress.nfs.foo spec with placement count:2 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 2026-03-10T07:20:38.184035+0000 mgr.vm05.wnsmpp (mgr.14195) 156 : cephadm [INF] Saving service ingress.nfs.foo spec with placement count:2 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.187264+0000 mon.vm05 (mon.0) 696 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.187264+0000 mon.vm05 (mon.0) 696 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.323185+0000 mon.vm05 (mon.0) 697 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.323185+0000 mon.vm05 (mon.0) 697 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.328282+0000 mon.vm05 (mon.0) 698 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.328282+0000 mon.vm05 (mon.0) 698 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.334175+0000 mon.vm05 (mon.0) 699 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.334175+0000 mon.vm05 (mon.0) 699 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.334901+0000 mon.vm05 (mon.0) 700 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm05.oxovsp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.334901+0000 mon.vm05 (mon.0) 700 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm05.oxovsp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.343582+0000 mon.vm05 (mon.0) 701 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm05.oxovsp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.343582+0000 mon.vm05 (mon.0) 701 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm05.oxovsp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.345788+0000 mon.vm05 (mon.0) 702 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:38.345788+0000 mon.vm05 (mon.0) 702 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 2026-03-10T07:20:38.346485+0000 mgr.vm05.wnsmpp (mgr.14195) 157 : cephadm [INF] Deploying daemon mds.foofs.vm05.oxovsp on vm05 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: cephadm 
2026-03-10T07:20:38.346485+0000 mgr.vm05.wnsmpp (mgr.14195) 157 : cephadm [INF] Deploying daemon mds.foofs.vm05.oxovsp on vm05 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.139302+0000 mon.vm05 (mon.0) 703 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.139302+0000 mon.vm05 (mon.0) 703 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.143472+0000 mon.vm05 (mon.0) 704 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.143472+0000 mon.vm05 (mon.0) 704 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.147201+0000 mon.vm05 (mon.0) 705 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.147201+0000 mon.vm05 (mon.0) 705 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.151748+0000 mon.vm05 (mon.0) 706 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.151748+0000 mon.vm05 (mon.0) 706 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.164024+0000 mon.vm05 (mon.0) 707 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.164024+0000 mon.vm05 (mon.0) 707 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.177029+0000 mon.vm05 (mon.0) 708 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:39.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:39 vm05 bash[17520]: audit 2026-03-10T07:20:39.177029+0000 mon.vm05 (mon.0) 708 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: cephadm 2026-03-10T07:20:37.454755+0000 mgr.vm05.wnsmpp (mgr.14195) 153 : cephadm [INF] Deploying daemon mds.foofs.vm09.kuyylf on vm09 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: cephadm 2026-03-10T07:20:37.454755+0000 mgr.vm05.wnsmpp 
(mgr.14195) 153 : cephadm [INF] Deploying daemon mds.foofs.vm09.kuyylf on vm09 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.177421+0000 mgr.vm05.wnsmpp (mgr.14195) 154 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.177421+0000 mgr.vm05.wnsmpp (mgr.14195) 154 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: cephadm 2026-03-10T07:20:38.179273+0000 mgr.vm05.wnsmpp (mgr.14195) 155 : cephadm [INF] Saving service nfs.foo spec with placement count:2 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: cephadm 2026-03-10T07:20:38.179273+0000 mgr.vm05.wnsmpp (mgr.14195) 155 : cephadm [INF] Saving service nfs.foo spec with placement count:2 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.183345+0000 mon.vm05 (mon.0) 695 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.183345+0000 mon.vm05 (mon.0) 695 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: cephadm 2026-03-10T07:20:38.184035+0000 mgr.vm05.wnsmpp (mgr.14195) 156 : cephadm [INF] Saving service ingress.nfs.foo spec with placement count:2 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: cephadm 2026-03-10T07:20:38.184035+0000 mgr.vm05.wnsmpp (mgr.14195) 156 : cephadm [INF] Saving service ingress.nfs.foo spec with placement count:2 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.187264+0000 mon.vm05 (mon.0) 696 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.187264+0000 mon.vm05 (mon.0) 696 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.323185+0000 mon.vm05 (mon.0) 697 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.323185+0000 mon.vm05 (mon.0) 697 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.328282+0000 mon.vm05 (mon.0) 698 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.328282+0000 mon.vm05 (mon.0) 698 : audit [INF] from='mgr.14195 
192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.334175+0000 mon.vm05 (mon.0) 699 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.334901+0000 mon.vm05 (mon.0) 700 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm05.oxovsp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.343582+0000 mon.vm05 (mon.0) 701 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm05.oxovsp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:38.345788+0000 mon.vm05 (mon.0) 702 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: cephadm 2026-03-10T07:20:38.346485+0000 mgr.vm05.wnsmpp (mgr.14195) 157 : cephadm [INF] Deploying daemon mds.foofs.vm05.oxovsp on vm05
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:39.139302+0000 mon.vm05 (mon.0) 703 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:39.143472+0000 mon.vm05 (mon.0) 704 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:39.147201+0000 mon.vm05 (mon.0) 705 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:39.151748+0000 mon.vm05 (mon.0) 706 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:39.164024+0000 mon.vm05 (mon.0) 707 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:39.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:39 vm09 bash[21099]: audit 2026-03-10T07:20:39.177029+0000 mon.vm05 (mon.0) 708 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:38.659108+0000 mgr.vm05.wnsmpp (mgr.14195) 158 : cluster [DBG] pgmap v104: 65 pgs: 54 active+clean, 11 creating+peering; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.352337+0000 mon.vm05 (mon.0) 709 : cluster [DBG] mds.? [v2:192.168.123.109:6832/872230326,v1:192.168.123.109:6833/872230326] up:boot
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.352472+0000 mon.vm05 (mon.0) 710 : cluster [DBG] mds.? [v2:192.168.123.105:6834/1496181166,v1:192.168.123.105:6835/1496181166] up:boot
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.352623+0000 mon.vm05 (mon.0) 711 : cluster [INF] daemon mds.foofs.vm05.oxovsp assigned to filesystem foofs as rank 0 (now has 1 ranks)
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.352718+0000 mon.vm05 (mon.0) 712 : cluster [INF] Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.352771+0000 mon.vm05 (mon.0) 713 : cluster [INF] Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.352818+0000 mon.vm05 (mon.0) 714 : cluster [INF] Cluster is now healthy
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.353940+0000 mon.vm05 (mon.0) 715 : cluster [DBG] fsmap foofs:0 2 up:standby
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: audit 2026-03-10T07:20:39.354066+0000 mon.vm05 (mon.0) 716 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata", "who": "foofs.vm05.oxovsp"}]: dispatch
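The records above show the cephadm mgr bringing up the first MDS for foofs: it mints a daemon keyring (audit records 700/701), deploys mds.foofs.vm05.oxovsp (cephadm 157), and the MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks clear as the daemon boots and takes rank 0 (records 712-714). A minimal sketch of the same key creation done by hand from a cephadm shell; the entity name and caps are copied verbatim from audit record 700, while the follow-up status commands are an assumption about how one might watch the checks clear, not something this run executed:

    ceph auth get-or-create mds.foofs.vm05.oxovsp \
        mon 'profile mds' \
        osd 'allow rw tag cephfs *=*' \
        mds 'allow'
    ceph health detail    # records 712-714: MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX clear here
    ceph fs status foofs  # rank 0 should show up:creating, then up:active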
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: audit 2026-03-10T07:20:39.356303+0000 mon.vm05 (mon.0) 717 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata", "who": "foofs.vm09.kuyylf"}]: dispatch
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.367027+0000 mon.vm05 (mon.0) 718 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:creating} 1 up:standby
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: cluster 2026-03-10T07:20:39.403480+0000 mon.vm05 (mon.0) 719 : cluster [INF] daemon mds.foofs.vm05.oxovsp is now active in filesystem foofs as rank 0
2026-03-10T07:20:40.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:40 vm09 bash[21099]: audit 2026-03-10T07:20:39.538683+0000 mon.vm05 (mon.0) 720 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:40.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:38.659108+0000 mgr.vm05.wnsmpp (mgr.14195) 158 : cluster [DBG] pgmap v104: 65 pgs: 54 active+clean, 11 creating+peering; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:40.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.352337+0000 mon.vm05 (mon.0) 709 : cluster [DBG] mds.? [v2:192.168.123.109:6832/872230326,v1:192.168.123.109:6833/872230326] up:boot
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.352472+0000 mon.vm05 (mon.0) 710 : cluster [DBG] mds.? [v2:192.168.123.105:6834/1496181166,v1:192.168.123.105:6835/1496181166] up:boot
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.352623+0000 mon.vm05 (mon.0) 711 : cluster [INF] daemon mds.foofs.vm05.oxovsp assigned to filesystem foofs as rank 0 (now has 1 ranks)
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.352718+0000 mon.vm05 (mon.0) 712 : cluster [INF] Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.352771+0000 mon.vm05 (mon.0) 713 : cluster [INF] Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.352818+0000 mon.vm05 (mon.0) 714 : cluster [INF] Cluster is now healthy
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.353940+0000 mon.vm05 (mon.0) 715 : cluster [DBG] fsmap foofs:0 2 up:standby
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: audit 2026-03-10T07:20:39.354066+0000 mon.vm05 (mon.0) 716 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata", "who": "foofs.vm05.oxovsp"}]: dispatch
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: audit 2026-03-10T07:20:39.356303+0000 mon.vm05 (mon.0) 717 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "mds metadata", "who": "foofs.vm09.kuyylf"}]: dispatch
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.367027+0000 mon.vm05 (mon.0) 718 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:creating} 1 up:standby
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: cluster 2026-03-10T07:20:39.403480+0000 mon.vm05 (mon.0) 719 : cluster [INF] daemon mds.foofs.vm05.oxovsp is now active in filesystem foofs as rank 0
2026-03-10T07:20:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:40 vm05 bash[17520]: audit 2026-03-10T07:20:39.538683+0000 mon.vm05 (mon.0) 720 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:41.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:41 vm09 bash[21099]: cluster 2026-03-10T07:20:40.376261+0000 mon.vm05 (mon.0) 721 : cluster [DBG] mds.? [v2:192.168.123.105:6834/1496181166,v1:192.168.123.105:6835/1496181166] up:active
2026-03-10T07:20:41.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:41 vm09 bash[21099]: cluster 2026-03-10T07:20:40.376976+0000 mon.vm05 (mon.0) 722 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:active} 1 up:standby
2026-03-10T07:20:41.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:41 vm09 bash[21099]: cluster 2026-03-10T07:20:40.380218+0000 mon.vm05 (mon.0) 723 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:active} 1 up:standby
2026-03-10T07:20:41.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:41 vm05 bash[17520]: cluster 2026-03-10T07:20:40.376261+0000 mon.vm05 (mon.0) 721 : cluster [DBG] mds.? [v2:192.168.123.105:6834/1496181166,v1:192.168.123.105:6835/1496181166] up:active
2026-03-10T07:20:41.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:41 vm05 bash[17520]: cluster 2026-03-10T07:20:40.376976+0000 mon.vm05 (mon.0) 722 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:active} 1 up:standby
2026-03-10T07:20:41.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:41 vm05 bash[17520]: cluster 2026-03-10T07:20:40.380218+0000 mon.vm05 (mon.0) 723 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:active} 1 up:standby
2026-03-10T07:20:42.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:42 vm09 bash[21099]: cluster 2026-03-10T07:20:40.659421+0000 mgr.vm05.wnsmpp (mgr.14195) 159 : cluster [DBG] pgmap v105: 65 pgs: 65 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 273 B/s wr, 1 op/s
2026-03-10T07:20:42.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:42 vm05 bash[17520]: cluster 2026-03-10T07:20:40.659421+0000 mgr.vm05.wnsmpp (mgr.14195) 159 : cluster [DBG] pgmap v105: 65 pgs: 65 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 273 B/s wr, 1 op/s
2026-03-10T07:20:42.918 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:43.170 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:20:43.170 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:20:31.532023Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:20:31.096561Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:20:31.096639Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:20:31.532193Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:20:38.187535Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 0, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:20:31.096667Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:20:31.096614Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:20:38.183710Z service:nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:20:31.096721Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:20:31.096482Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:20:31.532054Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T07:20:43.221 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:20:43.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:43 vm05 bash[17520]: audit 2026-03-10T07:20:42.674810+0000 mon.vm05 (mon.0) 724 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:20:43.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:43 vm09 bash[21099]: audit 2026-03-10T07:20:42.674810+0000 mon.vm05 (mon.0) 724 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:20:44.222 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:20:44.641 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:44 vm05 bash[17520]: cluster 2026-03-10T07:20:42.659668+0000 mgr.vm05.wnsmpp (mgr.14195) 160 : cluster [DBG] pgmap v106: 65 pgs: 65 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s wr, 1 op/s
2026-03-10T07:20:44.641 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:44 vm05 bash[17520]: audit 2026-03-10T07:20:43.169550+0000 mgr.vm05.wnsmpp (mgr.14195) 161 : audit [DBG] from='client.14486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:20:44.641 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:44 vm05 bash[17520]: cluster 2026-03-10T07:20:43.397944+0000 mon.vm05 (mon.0) 725 : cluster [DBG] mds.? [v2:192.168.123.109:6832/872230326,v1:192.168.123.109:6833/872230326] up:standby
2026-03-10T07:20:44.641 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:44 vm05 bash[17520]: cluster 2026-03-10T07:20:43.398052+0000 mon.vm05 (mon.0) 726 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:active} 1 up:standby
2026-03-10T07:20:44.641 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:44 vm05 bash[17520]: audit 2026-03-10T07:20:44.320560+0000 mon.vm05 (mon.0) 727 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:44.641 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:44 vm05 bash[17520]: audit 2026-03-10T07:20:44.327476+0000 mon.vm05 (mon.0) 728 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:44.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:44 vm09 bash[21099]: cluster 2026-03-10T07:20:42.659668+0000 mgr.vm05.wnsmpp (mgr.14195) 160 : cluster [DBG] pgmap v106: 65 pgs: 65 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s wr, 1 op/s
2026-03-10T07:20:44.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:44 vm09 bash[21099]: audit 2026-03-10T07:20:43.169550+0000 mgr.vm05.wnsmpp (mgr.14195) 161 : audit [DBG] from='client.14486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:20:44.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:44 vm09 bash[21099]: cluster 2026-03-10T07:20:43.397944+0000 mon.vm05 (mon.0) 725 : cluster [DBG] mds.? [v2:192.168.123.109:6832/872230326,v1:192.168.123.109:6833/872230326] up:standby
2026-03-10T07:20:44.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:44 vm09 bash[21099]: cluster 2026-03-10T07:20:43.398052+0000 mon.vm05 (mon.0) 726 : cluster [DBG] fsmap foofs:1 {0=foofs.vm05.oxovsp=up:active} 1 up:standby
2026-03-10T07:20:44.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:44 vm09 bash[21099]: audit 2026-03-10T07:20:44.320560+0000 mon.vm05 (mon.0) 727 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:44.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:44 vm09 bash[21099]: audit 2026-03-10T07:20:44.327476+0000 mon.vm05 (mon.0) 728 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.573235+0000 mon.vm05 (mon.0) 729 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.578832+0000 mon.vm05 (mon.0) 730 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.580149+0000 mon.vm05 (mon.0) 731 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.580727+0000 mon.vm05 (mon.0) 732 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.585037+0000 mon.vm05 (mon.0) 733 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.586860+0000 mon.vm05 (mon.0) 734 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.593907+0000 mon.vm05 (mon.0) 735 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.594697+0000 mgr.vm05.wnsmpp (mgr.14195) 162 : cephadm [INF] Creating key for client.nfs.foo.0.0.vm05.adjxhw
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.594870+0000 mon.vm05 (mon.0) 736 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm05.adjxhw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.597797+0000 mon.vm05 (mon.0) 737 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm05.adjxhw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.600289+0000 mgr.vm05.wnsmpp (mgr.14195) 163 : cephadm [INF] Ensuring nfs.foo.0 is in the ganesha grace table
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.600463+0000 mon.vm05 (mon.0) 738 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.602190+0000 mon.vm05 (mon.0) 739 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.605197+0000 mon.vm05 (mon.0) 740 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.636510+0000 mgr.vm05.wnsmpp (mgr.14195) 164 : cephadm [WRN] ganesha-rados-grace tool failed: rados_pool_create: -1
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: Can't connect to cluster: -1
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.636814+0000 mon.vm05 (mon.0) 741 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.639225+0000 mon.vm05 (mon.0) 742 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.642047+0000 mgr.vm05.wnsmpp (mgr.14195) 165 : cephadm [ERR] Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1
2026-03-10T07:20:45.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: Can't connect to cluster: -1
2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.643917+0000 mgr.vm05.wnsmpp (mgr.14195) 166 : cephadm [INF] Creating key for client.nfs.foo.1.0.vm09.pgwkva
2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.644096+0000 mon.vm05 (mon.0) 743 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.vm09.pgwkva", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
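The "nfs.foo has 0/2" line above is the teuthology cephadm task waiting for the nfs.foo service: it runs the "ceph orch ls -f json" invocation shown in the DEBUG line and compares status.running against status.size for the service. A sketch of the same check done by hand, assuming jq is installed on the host; the cephadm command line is the one logged verbatim above:

    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json \
      | jq -r '.[] | select(.service_name == "nfs.foo") | "\(.status.running)/\(.status.size)"'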
2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.644096+0000 mon.vm05 (mon.0) 743 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.vm09.pgwkva", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.646038+0000 mon.vm05 (mon.0) 744 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.vm09.pgwkva", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.646038+0000 mon.vm05 (mon.0) 744 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.vm09.pgwkva", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.648667+0000 mgr.vm05.wnsmpp (mgr.14195) 167 : cephadm [INF] Ensuring nfs.foo.1 is in the ganesha grace table 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.648667+0000 mgr.vm05.wnsmpp (mgr.14195) 167 : cephadm [INF] Ensuring nfs.foo.1 is in the ganesha grace table 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.648801+0000 mon.vm05 (mon.0) 745 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.648801+0000 mon.vm05 (mon.0) 745 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.650448+0000 mon.vm05 (mon.0) 746 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.650448+0000 mon.vm05 (mon.0) 746 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.653084+0000 mon.vm05 (mon.0) 747 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.653084+0000 mon.vm05 (mon.0) 747 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cluster 2026-03-10T07:20:44.660414+0000 mgr.vm05.wnsmpp (mgr.14195) 168 : cluster [DBG] pgmap v107: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s wr, 4 op/s 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cluster 2026-03-10T07:20:44.660414+0000 mgr.vm05.wnsmpp (mgr.14195) 168 : cluster [DBG] pgmap v107: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s wr, 4 op/s 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.686311+0000 mgr.vm05.wnsmpp (mgr.14195) 169 : cephadm [WRN] ganesha-rados-grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: Can't connect to cluster: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.686311+0000 mgr.vm05.wnsmpp (mgr.14195) 169 : cephadm [WRN] ganesha-rados-grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: Can't connect to cluster: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.686497+0000 mon.vm05 (mon.0) 748 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.686497+0000 mon.vm05 (mon.0) 748 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.688793+0000 mon.vm05 (mon.0) 749 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.688793+0000 mon.vm05 (mon.0) 749 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.691427+0000 mgr.vm05.wnsmpp (mgr.14195) 170 : cephadm [ERR] Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: Can't connect to cluster: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cephadm 2026-03-10T07:20:44.691427+0000 mgr.vm05.wnsmpp (mgr.14195) 
170 : cephadm [ERR] Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: Can't connect to cluster: -1 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cluster 2026-03-10T07:20:44.692493+0000 mgr.vm05.wnsmpp (mgr.14195) 171 : cluster [DBG] pgmap v108: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s wr, 4 op/s 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: cluster 2026-03-10T07:20:44.692493+0000 mgr.vm05.wnsmpp (mgr.14195) 171 : cluster [DBG] pgmap v108: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s wr, 4 op/s 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.707186+0000 mon.vm05 (mon.0) 750 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:20:45.925 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:45 vm09 bash[21099]: audit 2026-03-10T07:20:44.707186+0000 mon.vm05 (mon.0) 750 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.573235+0000 mon.vm05 (mon.0) 729 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.573235+0000 mon.vm05 (mon.0) 729 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.578832+0000 mon.vm05 (mon.0) 730 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.578832+0000 mon.vm05 (mon.0) 730 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.580149+0000 mon.vm05 (mon.0) 731 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.580149+0000 mon.vm05 (mon.0) 731 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.580727+0000 mon.vm05 (mon.0) 732 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.580727+0000 mon.vm05 (mon.0) 732 
: audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.585037+0000 mon.vm05 (mon.0) 733 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.586860+0000 mon.vm05 (mon.0) 734 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.593907+0000 mon.vm05 (mon.0) 735 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.594697+0000 mgr.vm05.wnsmpp (mgr.14195) 162 : cephadm [INF] Creating key for client.nfs.foo.0.0.vm05.adjxhw
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.594870+0000 mon.vm05 (mon.0) 736 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm05.adjxhw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.597797+0000 mon.vm05 (mon.0) 737 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm05.adjxhw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.600289+0000 mgr.vm05.wnsmpp (mgr.14195) 163 : cephadm [INF] Ensuring nfs.foo.0 is in the ganesha grace table
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.600463+0000 mon.vm05 (mon.0) 738 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.602190+0000 mon.vm05 (mon.0) 739 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.605197+0000 mon.vm05 (mon.0) 740 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.636510+0000 mgr.vm05.wnsmpp (mgr.14195) 164 : cephadm [WRN] ganesha-rados-grace tool failed: rados_pool_create: -1
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: Can't connect to cluster: -1
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.636814+0000 mon.vm05 (mon.0) 741 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.639225+0000 mon.vm05 (mon.0) 742 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.642047+0000 mgr.vm05.wnsmpp (mgr.14195) 165 : cephadm [ERR] Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: Can't connect to cluster: -1
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.643917+0000 mgr.vm05.wnsmpp (mgr.14195) 166 : cephadm [INF] Creating key for client.nfs.foo.1.0.vm09.pgwkva
2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.644096+0000 mon.vm05 (mon.0) 743 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.vm09.pgwkva", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
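
[note: The [WRN]/[ERR] pair above reads as a startup race rather than a real connectivity loss: the grace-table update runs before the .nfs pool exists, and the short-lived client.mgr.nfs.grace.nfs.foo key only carries mon 'allow r', so the tool's attempt to auto-create the missing pool is rejected (rados_pool_create: -1) and it bails out with "Can't connect to cluster". A hypothetical manual reproduction of the same check, reusing the image/fsid strings from this run and the ganesha-rados-grace flags from its man page:

    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ganesha-rados-grace --pool .nfs --ns foo dump

As the entries below show, cephadm recovers on its own once it creates the pool itself.]
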
namespace=foo"]}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.646038+0000 mon.vm05 (mon.0) 744 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.vm09.pgwkva", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.646038+0000 mon.vm05 (mon.0) 744 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.vm09.pgwkva", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.648667+0000 mgr.vm05.wnsmpp (mgr.14195) 167 : cephadm [INF] Ensuring nfs.foo.1 is in the ganesha grace table 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.648667+0000 mgr.vm05.wnsmpp (mgr.14195) 167 : cephadm [INF] Ensuring nfs.foo.1 is in the ganesha grace table 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.648801+0000 mon.vm05 (mon.0) 745 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.648801+0000 mon.vm05 (mon.0) 745 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.650448+0000 mon.vm05 (mon.0) 746 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.650448+0000 mon.vm05 (mon.0) 746 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.653084+0000 mon.vm05 (mon.0) 747 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:45.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.653084+0000 mon.vm05 (mon.0) 747 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:20:45.961 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cluster 2026-03-10T07:20:44.660414+0000 mgr.vm05.wnsmpp (mgr.14195) 168 : cluster [DBG] pgmap v107: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s wr, 4 op/s 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cluster 2026-03-10T07:20:44.660414+0000 mgr.vm05.wnsmpp (mgr.14195) 168 : cluster [DBG] pgmap v107: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s wr, 4 op/s 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.686311+0000 mgr.vm05.wnsmpp (mgr.14195) 169 : cephadm [WRN] ganesha-rados-grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: Can't connect to cluster: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.686311+0000 mgr.vm05.wnsmpp (mgr.14195) 169 : cephadm [WRN] ganesha-rados-grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: Can't connect to cluster: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.686497+0000 mon.vm05 (mon.0) 748 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.686497+0000 mon.vm05 (mon.0) 748 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.688793+0000 mon.vm05 (mon.0) 749 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.688793+0000 mon.vm05 (mon.0) 749 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.691427+0000 mgr.vm05.wnsmpp (mgr.14195) 170 : cephadm [ERR] Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: Can't connect to cluster: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cephadm 2026-03-10T07:20:44.691427+0000 mgr.vm05.wnsmpp (mgr.14195) 170 : cephadm [ERR] Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: Can't connect to cluster: -1 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cluster 
2026-03-10T07:20:44.692493+0000 mgr.vm05.wnsmpp (mgr.14195) 171 : cluster [DBG] pgmap v108: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s wr, 4 op/s 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: cluster 2026-03-10T07:20:44.692493+0000 mgr.vm05.wnsmpp (mgr.14195) 171 : cluster [DBG] pgmap v108: 65 pgs: 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s wr, 4 op/s 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.707186+0000 mon.vm05 (mon.0) 750 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:20:45.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:45 vm05 bash[17520]: audit 2026-03-10T07:20:44.707186+0000 mon.vm05 (mon.0) 750 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T07:20:46.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: cluster 2026-03-10T07:20:45.650456+0000 mon.vm05 (mon.0) 751 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL) 2026-03-10T07:20:46.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: cluster 2026-03-10T07:20:45.650456+0000 mon.vm05 (mon.0) 751 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL) 2026-03-10T07:20:46.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: audit 2026-03-10T07:20:45.691956+0000 mon.vm05 (mon.0) 752 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished 2026-03-10T07:20:46.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: audit 2026-03-10T07:20:45.691956+0000 mon.vm05 (mon.0) 752 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished 2026-03-10T07:20:46.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: cluster 2026-03-10T07:20:45.695434+0000 mon.vm05 (mon.0) 753 : cluster [DBG] osdmap e29: 8 total, 8 up, 8 in 2026-03-10T07:20:46.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: cluster 2026-03-10T07:20:45.695434+0000 mon.vm05 (mon.0) 753 : cluster [DBG] osdmap e29: 8 total, 8 up, 8 in 2026-03-10T07:20:46.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: audit 2026-03-10T07:20:45.704925+0000 mon.vm05 (mon.0) 754 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch 2026-03-10T07:20:46.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:46 vm05 bash[17520]: audit 2026-03-10T07:20:45.704925+0000 mon.vm05 (mon.0) 754 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch 2026-03-10T07:20:46.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:46 vm09 bash[21099]: cluster 
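
[note: The CEPHADM_DAEMON_PLACE_FAIL warning here is transient: in the entries immediately around it the mgr creates the missing .nfs pool itself ("osd pool create" ... finished) and tags it with the nfs application, after which the nfs.foo daemons place cleanly. On a live cluster the same state could be inspected with standard commands, e.g.:

    ceph health detail
    ceph orch ls nfs
]
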
2026-03-10T07:20:46.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:46 vm09 bash[21099]: cluster 2026-03-10T07:20:45.650456+0000 mon.vm05 (mon.0) 751 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)
2026-03-10T07:20:46.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:46 vm09 bash[21099]: audit 2026-03-10T07:20:45.691956+0000 mon.vm05 (mon.0) 752 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
2026-03-10T07:20:46.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:46 vm09 bash[21099]: cluster 2026-03-10T07:20:45.695434+0000 mon.vm05 (mon.0) 753 : cluster [DBG] osdmap e29: 8 total, 8 up, 8 in
2026-03-10T07:20:46.924 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:46 vm09 bash[21099]: audit 2026-03-10T07:20:45.704925+0000 mon.vm05 (mon.0) 754 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
2026-03-10T07:20:47.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:47 vm05 bash[17520]: cluster 2026-03-10T07:20:46.692855+0000 mgr.vm05.wnsmpp (mgr.14195) 172 : cluster [DBG] pgmap v110: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s wr, 5 op/s
2026-03-10T07:20:47.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:47 vm05 bash[17520]: audit 2026-03-10T07:20:46.695372+0000 mon.vm05 (mon.0) 755 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
2026-03-10T07:20:47.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:47 vm05 bash[17520]: cluster 2026-03-10T07:20:46.701920+0000 mon.vm05 (mon.0) 756 : cluster [DBG] osdmap e30: 8 total, 8 up, 8 in
2026-03-10T07:20:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:47 vm05 bash[17520]: audit 2026-03-10T07:20:46.730516+0000 mon.vm05 (mon.0) 757 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:47 vm05 bash[17520]: audit 2026-03-10T07:20:46.738423+0000 mon.vm05 (mon.0) 758 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:47 vm05 bash[17520]: cephadm 2026-03-10T07:20:46.743458+0000 mgr.vm05.wnsmpp (mgr.14195) 173 : cephadm [INF] Deploying daemon haproxy.nfs.foo.vm09.etnbzh on vm09
2026-03-10T07:20:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:47 vm05 bash[17520]: cluster 2026-03-10T07:20:47.577861+0000 mon.vm05 (mon.0) 759 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:20:47.968 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:47 vm09 bash[21099]: cluster 2026-03-10T07:20:46.692855+0000 mgr.vm05.wnsmpp (mgr.14195) 172 : cluster [DBG] pgmap v110: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s wr, 5 op/s
2026-03-10T07:20:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:47 vm09 bash[21099]: audit 2026-03-10T07:20:46.695372+0000 mon.vm05 (mon.0) 755 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
2026-03-10T07:20:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:47 vm09 bash[21099]: cluster 2026-03-10T07:20:46.701920+0000 mon.vm05 (mon.0) 756 : cluster [DBG] osdmap e30: 8 total, 8 up, 8 in
2026-03-10T07:20:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:47 vm09 bash[21099]: audit 2026-03-10T07:20:46.730516+0000 mon.vm05 (mon.0) 757 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:47 vm09 bash[21099]: audit 2026-03-10T07:20:46.738423+0000 mon.vm05 (mon.0) 758 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:47 vm09 bash[21099]: cephadm 2026-03-10T07:20:46.743458+0000 mgr.vm05.wnsmpp (mgr.14195) 173 : cephadm [INF] Deploying daemon haproxy.nfs.foo.vm09.etnbzh on vm09
2026-03-10T07:20:48.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:47 vm09 bash[21099]: cluster 2026-03-10T07:20:47.577861+0000 mon.vm05 (mon.0) 759 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T07:20:48.247 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:20:48.247 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:20:44.566682Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:20:44.314122Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:20:44.314215Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:20:44.566838Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:20:38.187535Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 0, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:20:44.314280Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:20:44.314244Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:20:44.314185Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:20:46.730797Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:20:44.314350Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:20:44.314028Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:20:44.566734Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T07:20:48.301 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:20:48.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:48 vm05 bash[17520]: cluster 2026-03-10T07:20:47.708151+0000 mon.vm05 (mon.0) 760 : cluster [DBG] osdmap e31: 8 total, 8 up, 8 in
2026-03-10T07:20:48.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:48 vm05 bash[17520]: audit 2026-03-10T07:20:48.245714+0000 mgr.vm05.wnsmpp (mgr.14195) 174 : audit [DBG] from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:20:49.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:48 vm09 bash[21099]: cluster 2026-03-10T07:20:47.708151+0000 mon.vm05 (mon.0) 760 : cluster [DBG] osdmap e31: 8 total, 8 up, 8 in
2026-03-10T07:20:49.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:48 vm09 bash[21099]: audit 2026-03-10T07:20:48.245714+0000 mgr.vm05.wnsmpp (mgr.14195) 174 : audit [DBG] from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:20:49.301 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:20:50.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:49 vm09 bash[21099]: cluster 2026-03-10T07:20:48.693295+0000 mgr.vm05.wnsmpp (mgr.14195) 175 : cluster [DBG] pgmap v113: 97 pgs: 19 unknown, 78 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:50.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:49 vm09 bash[21099]: cluster 2026-03-10T07:20:48.704391+0000 mon.vm05 (mon.0) 761 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T07:20:50.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:49 vm09 bash[21099]: audit 2026-03-10T07:20:49.544170+0000 mon.vm05 (mon.0) 762 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:50.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:49 vm05 bash[17520]: cluster 2026-03-10T07:20:48.693295+0000 mgr.vm05.wnsmpp (mgr.14195) 175 : cluster [DBG] pgmap v113: 97 pgs: 19 unknown, 78 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:50.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:49 vm05 bash[17520]: cluster 2026-03-10T07:20:48.704391+0000 mon.vm05 (mon.0) 761 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T07:20:50.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:49 vm05 bash[17520]: audit 2026-03-10T07:20:49.544170+0000 mon.vm05 (mon.0) 762 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:51.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:51 vm05 bash[17520]: cluster 2026-03-10T07:20:50.693753+0000 mgr.vm05.wnsmpp (mgr.14195) 176 : cluster [DBG] pgmap v114: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:52.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:51 vm09 bash[21099]: cluster 2026-03-10T07:20:50.693753+0000 mgr.vm05.wnsmpp (mgr.14195) 176 : cluster [DBG] pgmap v114: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:53.941 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:20:54.121 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:53 vm09 bash[21099]: cluster 2026-03-10T07:20:52.694158+0000 mgr.vm05.wnsmpp (mgr.14195) 177 : cluster [DBG] pgmap v115: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:54.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:53 vm05 bash[17520]: cluster 2026-03-10T07:20:52.694158+0000 mgr.vm05.wnsmpp (mgr.14195) 177 : cluster [DBG] pgmap v115: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:54.233 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:20:54.233 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:20:44.566682Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:20:44.314122Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:20:44.314215Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:20:44.566838Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:20:38.187535Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 0, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:20:44.314280Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:20:44.314244Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:20:44.314185Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:20:46.730797Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:20:44.314350Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:20:44.314028Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:20:44.566734Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T07:20:54.285 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:20:54.419 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:54 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:54.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:54 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:55.019 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:54 vm09 bash[21099]: audit 2026-03-10T07:20:54.232570+0000 mgr.vm05.wnsmpp (mgr.14195) 178 : audit [DBG] from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:20:55.019 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:54 vm09 bash[21099]: audit 2026-03-10T07:20:54.652085+0000 mon.vm05 (mon.0) 763 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:55.019 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:54 vm09 bash[21099]: audit 2026-03-10T07:20:54.657639+0000 mon.vm05 (mon.0) 764 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:55.020 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:54 vm09 bash[21099]: audit 2026-03-10T07:20:54.663681+0000 mon.vm05 (mon.0) 765 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:55.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:54 vm05 bash[17520]: audit 2026-03-10T07:20:54.232570+0000 mgr.vm05.wnsmpp (mgr.14195) 178 : audit [DBG] from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:20:55.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:54 vm05 bash[17520]: audit 2026-03-10T07:20:54.652085+0000 mon.vm05 (mon.0) 763 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:55.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:54 vm05 bash[17520]: audit 2026-03-10T07:20:54.657639+0000 mon.vm05 (mon.0) 764 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:55.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:54 vm05 bash[17520]: audit 2026-03-10T07:20:54.663681+0000 mon.vm05 (mon.0) 765 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:55.286 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:20:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:55 vm09 bash[21099]: cephadm 2026-03-10T07:20:54.665171+0000 mgr.vm05.wnsmpp (mgr.14195) 179 : cephadm [INF] Deploying daemon haproxy.nfs.foo.vm05.yhprte on vm05
2026-03-10T07:20:56.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:55 vm09 bash[21099]: cluster 2026-03-10T07:20:54.694583+0000 mgr.vm05.wnsmpp (mgr.14195) 180 : cluster [DBG] pgmap v116: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:56.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:55 vm05 bash[17520]: cephadm 2026-03-10T07:20:54.665171+0000 mgr.vm05.wnsmpp (mgr.14195) 179 : cephadm [INF] Deploying daemon haproxy.nfs.foo.vm05.yhprte on vm05
2026-03-10T07:20:56.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:55 vm05 bash[17520]: cluster 2026-03-10T07:20:54.694583+0000 mgr.vm05.wnsmpp (mgr.14195) 180 : cluster [DBG] pgmap v116: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:58.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:57 vm09 bash[21099]: cluster 2026-03-10T07:20:56.694935+0000 mgr.vm05.wnsmpp (mgr.14195) 181 : cluster [DBG] pgmap v117: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:58.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:57 vm09 bash[21099]: audit 2026-03-10T07:20:57.682283+0000 mon.vm05 (mon.0) 766 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:58.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:57 vm09 bash[21099]: audit 2026-03-10T07:20:57.682945+0000 mon.vm05 (mon.0) 767 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:20:58.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:57 vm05 bash[17520]: cluster 2026-03-10T07:20:56.694935+0000 mgr.vm05.wnsmpp (mgr.14195) 181 : cluster [DBG] pgmap v117: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:20:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:57 vm05 bash[17520]: audit 2026-03-10T07:20:57.682283+0000 mon.vm05 (mon.0) 766 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:20:58.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:57 vm05 bash[17520]: audit 2026-03-10T07:20:57.682945+0000 mon.vm05 (mon.0) 767 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:20:58.684 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:58 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:20:58.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:58 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:00.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: cluster 2026-03-10T07:20:58.695342+0000 mgr.vm05.wnsmpp (mgr.14195) 182 : cluster [DBG] pgmap v118: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:00.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: audit 2026-03-10T07:20:58.958818+0000 mon.vm05 (mon.0) 768 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: audit 2026-03-10T07:20:58.963818+0000 mon.vm05 (mon.0) 769 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: audit 2026-03-10T07:20:58.967166+0000 mon.vm05 (mon.0) 770 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: audit 2026-03-10T07:20:58.970088+0000 mon.vm05 (mon.0) 771 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: cephadm 2026-03-10T07:20:58.970486+0000 mgr.vm05.wnsmpp (mgr.14195) 183 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm05 interface ens3
2026-03-10T07:21:00.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: cephadm 2026-03-10T07:20:58.970526+0000 mgr.vm05.wnsmpp (mgr.14195) 184 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm09 interface ens3
2026-03-10T07:21:00.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: cephadm 2026-03-10T07:20:58.972996+0000 mgr.vm05.wnsmpp (mgr.14195) 185 : cephadm [INF] Deploying daemon keepalived.nfs.foo.vm05.zypjfy on vm05
2026-03-10T07:21:00.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:20:59 vm05 bash[17520]: audit 2026-03-10T07:20:59.551344+0000 mon.vm05 (mon.0) 772 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: cluster 2026-03-10T07:20:58.695342+0000 mgr.vm05.wnsmpp (mgr.14195) 182 : cluster [DBG] pgmap v118: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: audit 2026-03-10T07:20:58.958818+0000 mon.vm05 (mon.0) 768 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: audit 2026-03-10T07:20:58.963818+0000 mon.vm05 (mon.0) 769 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: audit 2026-03-10T07:20:58.967166+0000 mon.vm05 (mon.0) 770 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: audit 2026-03-10T07:20:58.970088+0000 mon.vm05 (mon.0) 771 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
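
[note: Entries 183-185 show how the ingress VIP gets placed: cephadm matched the requested virtual_ip 12.12.1.105 against each host's configured subnets, found it inside 12.12.0.0/22 on interface ens3, and deployed keepalived there to claim the address. One way to verify the match by hand on either host (standard iproute2 commands, illustrative):

    ip -o -4 addr show dev ens3
    ip route get 12.12.1.105
]
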
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: cephadm 2026-03-10T07:20:58.970486+0000 mgr.vm05.wnsmpp (mgr.14195) 183 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm05 interface ens3
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: cephadm 2026-03-10T07:20:58.970526+0000 mgr.vm05.wnsmpp (mgr.14195) 184 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm09 interface ens3
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: cephadm 2026-03-10T07:20:58.972996+0000 mgr.vm05.wnsmpp (mgr.14195) 185 : cephadm [INF] Deploying daemon keepalived.nfs.foo.vm05.zypjfy on vm05
2026-03-10T07:21:00.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:20:59 vm09 bash[21099]: audit 2026-03-10T07:20:59.551344+0000 mon.vm05 (mon.0) 772 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:00.924 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:21:01.210 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:21:01.210 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:20:44.566682Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:20:44.314122Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:20:44.314215Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:20:44.566838Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:20:58.967390Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 0, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:20:44.314280Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:20:44.314244Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:20:44.314185Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:20:46.730797Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:20:44.314350Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:20:44.314028Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:20:44.566734Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T07:21:01.279 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:21:02.281 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:21:02.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:01 vm05 bash[17520]: cluster 2026-03-10T07:21:00.695768+0000 mgr.vm05.wnsmpp (mgr.14195) 186 : cluster [DBG] pgmap v119: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:02.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:01 vm05 bash[17520]: audit 2026-03-10T07:21:01.208628+0000 mgr.vm05.wnsmpp (mgr.14195) 187 : audit [DBG] from='client.14506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:02.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:01 vm09 bash[21099]: cluster 2026-03-10T07:21:00.695768+0000 mgr.vm05.wnsmpp (mgr.14195) 186 : cluster [DBG] pgmap v119: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:02.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:01 vm09 bash[21099]: audit 2026-03-10T07:21:01.208628+0000 mgr.vm05.wnsmpp (mgr.14195) 187 : audit [DBG] from='client.14506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:04.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:03 vm09 bash[21099]: cluster 2026-03-10T07:21:02.696177+0000 mgr.vm05.wnsmpp (mgr.14195) 188 : cluster [DBG] pgmap v120: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:04.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:03 vm05 bash[17520]: cluster 2026-03-10T07:21:02.696177+0000 mgr.vm05.wnsmpp (mgr.14195) 188 : cluster [DBG] pgmap v120: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:06.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:05 vm09 bash[21099]: cluster 2026-03-10T07:21:04.696576+0000 mgr.vm05.wnsmpp (mgr.14195) 189 : cluster [DBG] pgmap v121: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:06.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:05 vm05 bash[17520]: cluster 2026-03-10T07:21:04.696576+0000 mgr.vm05.wnsmpp (mgr.14195) 189 : cluster [DBG] pgmap v121: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:06.933 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:21:07.391 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:21:07.391 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:20:44.566682Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:20:44.314122Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:20:44.314215Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:20:44.566838Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:20:58.967390Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 0, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:20:44.314280Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:20:44.314244Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:20:44.314185Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:20:46.730797Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:20:44.314350Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:20:44.314028Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:20:44.566734Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T07:21:07.444 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:21:08.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:08 vm05 bash[17520]: cluster 2026-03-10T07:21:06.696929+0000 mgr.vm05.wnsmpp (mgr.14195) 190 : cluster [DBG] pgmap v122: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:08.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:08 vm05 bash[17520]: audit 2026-03-10T07:21:07.390126+0000 mgr.vm05.wnsmpp (mgr.14195) 191 : audit [DBG] from='client.14510 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:08.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:08 vm09 bash[21099]: cluster 2026-03-10T07:21:06.696929+0000 mgr.vm05.wnsmpp (mgr.14195) 190 : cluster [DBG] pgmap v122: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:08.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:08 vm09 bash[21099]: audit 2026-03-10T07:21:07.390126+0000 mgr.vm05.wnsmpp (mgr.14195) 191 : audit [DBG] from='client.14510 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:08.444 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:21:09.881 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:09 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
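The reason the count stays at 0/2 is recorded in the nfs.foo "events" array of the dumps above: both initial placements failed with "grace tool failed: rados_pool_create: -1 / Can't connect to cluster: -1", i.e. the NFS-Ganesha grace-db tool run during deployment could not reach the cluster, so the orchestrator keeps retrying on later reconciliation passes (hence the fresh keepalived deployments). A small sketch of pulling those per-service ERROR events out of the `orch ls -f json` output follows; reading the dump from a file named orch_ls.json is an assumption for illustration.

    # Sketch: extract per-service ERROR events from `ceph orch ls -f json`
    # output (structure matches the dumps above).
    import json

    with open("orch_ls.json") as f:
        services = json.load(f)

    for svc in services:
        for event in svc.get("events", []):
            if "[ERROR]" in event:
                print(svc["service_name"], "->", event)
    # For nfs.foo this prints the two "grace tool failed: rados_pool_create: -1"
    # placement failures seen above.

The KillMode=none warnings interleaved here are systemd deprecation notices triggered each time a cephadm unit starts; they are noisy but unrelated to the nfs.foo failure.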
2026-03-10T07:21:10.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:10 vm09 bash[21099]: cluster 2026-03-10T07:21:08.697305+0000 mgr.vm05.wnsmpp (mgr.14195) 192 : cluster [DBG] pgmap v123: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:10.432 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:10 vm09 bash[21099]: audit 2026-03-10T07:21:09.914863+0000 mon.vm05 (mon.0) 773 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:10.432 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:10 vm09 bash[21099]: audit 2026-03-10T07:21:09.925878+0000 mon.vm05 (mon.0) 774 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:10.432 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:10 vm09 bash[21099]: audit 2026-03-10T07:21:09.931655+0000 mon.vm05 (mon.0) 775 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:10.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:10 vm05 bash[17520]: cluster 2026-03-10T07:21:08.697305+0000 mgr.vm05.wnsmpp (mgr.14195) 192 : cluster [DBG] pgmap v123: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:10.470 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:10 vm05 bash[17520]: audit 2026-03-10T07:21:09.914863+0000 mon.vm05 (mon.0) 773 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:10.470 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:10 vm05 bash[17520]: audit 2026-03-10T07:21:09.925878+0000 mon.vm05 (mon.0) 774 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:10.470 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:10 vm05 bash[17520]: audit 2026-03-10T07:21:09.931655+0000 mon.vm05 (mon.0) 775 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:11.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:11 vm05 bash[17520]: cephadm 2026-03-10T07:21:09.933552+0000 mgr.vm05.wnsmpp (mgr.14195) 193 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm09 interface ens3
2026-03-10T07:21:11.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:11 vm05 bash[17520]: cephadm 2026-03-10T07:21:09.933594+0000 mgr.vm05.wnsmpp (mgr.14195) 194 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm05 interface ens3
2026-03-10T07:21:11.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:11 vm05 bash[17520]: cephadm 2026-03-10T07:21:09.933907+0000 mgr.vm05.wnsmpp (mgr.14195) 195 : cephadm [INF] Deploying daemon keepalived.nfs.foo.vm09.ydtazh on vm09
2026-03-10T07:21:11.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:11 vm09 bash[21099]: cephadm 2026-03-10T07:21:09.933552+0000 mgr.vm05.wnsmpp (mgr.14195) 193 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm09 interface ens3
2026-03-10T07:21:11.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:11 vm09 bash[21099]: cephadm 2026-03-10T07:21:09.933594+0000 mgr.vm05.wnsmpp (mgr.14195) 194 : cephadm [INF] 12.12.1.105 is in 12.12.0.0/22 on vm05 interface ens3
2026-03-10T07:21:11.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:11 vm09 bash[21099]: cephadm 2026-03-10T07:21:09.933907+0000 mgr.vm05.wnsmpp (mgr.14195) 195 : cephadm [INF] Deploying daemon keepalived.nfs.foo.vm09.ydtazh on vm09
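With keepalived daemons now deployed on both hosts, the ingress.nfs.foo entry's "size": 4 against "placement": {"count": 2} makes sense: as I read cephadm's ingress service, each placement slot gets one haproxy plus one keepalived daemon. A tiny sketch of that arithmetic (an interpretation of the counts in the dumps, not cephadm source):

    # Why ingress.nfs.foo reports size 4 for placement count 2: an ingress
    # service schedules an haproxy and a keepalived daemon per slot.
    DAEMONS_PER_INGRESS_SLOT = 2  # haproxy + keepalived

    def expected_ingress_size(placement_count: int) -> int:
        return placement_count * DAEMONS_PER_INGRESS_SLOT

    assert expected_ingress_size(2) == 4  # matches "size": 4 in the dumps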
2026-03-10T07:21:12.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:12 vm05 bash[17520]: cluster 2026-03-10T07:21:10.697702+0000 mgr.vm05.wnsmpp (mgr.14195) 196 : cluster [DBG] pgmap v124: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:12.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:12 vm09 bash[21099]: cluster 2026-03-10T07:21:10.697702+0000 mgr.vm05.wnsmpp (mgr.14195) 196 : cluster [DBG] pgmap v124: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:13.460 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:13 vm05 bash[17520]: audit 2026-03-10T07:21:12.675674+0000 mon.vm05 (mon.0) 776 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:21:13.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:13 vm09 bash[21099]: audit 2026-03-10T07:21:12.675674+0000 mon.vm05 (mon.0) 776 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:21:14.105 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:21:14.354 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:21:14.354 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:20:44.566682Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:20:44.314122Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:20:44.314215Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:20:44.566838Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:21:09.932016Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 0, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:20:44.314280Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:20:44.314244Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:20:44.314185Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:20:46.730797Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:20:44.314350Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:20:44.314028Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:20:44.566734Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T07:21:14.410 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:14 vm05 bash[17520]: cluster 2026-03-10T07:21:12.698033+0000 mgr.vm05.wnsmpp (mgr.14195) 197 : cluster [DBG] pgmap v125: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:14.412 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:21:14.674 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:14 vm09 bash[21099]: cluster 2026-03-10T07:21:12.698033+0000 mgr.vm05.wnsmpp (mgr.14195) 197 : cluster [DBG] pgmap v125: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:15.413 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:21:15.667 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:15 vm09 bash[21099]: audit 2026-03-10T07:21:14.353230+0000 mgr.vm05.wnsmpp (mgr.14195) 198 : audit [DBG] from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:15.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:15 vm05 bash[17520]: audit 2026-03-10T07:21:14.353230+0000 mgr.vm05.wnsmpp (mgr.14195) 198 : audit [DBG] from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:16.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:16 vm09 bash[21099]: cluster 2026-03-10T07:21:14.698426+0000 mgr.vm05.wnsmpp (mgr.14195) 199 : cluster [DBG] pgmap v126: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:16.424 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:16 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:16.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:16 vm05 bash[17520]: cluster 2026-03-10T07:21:14.698426+0000 mgr.vm05.wnsmpp (mgr.14195) 199 : cluster [DBG] pgmap v126: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:17.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:17 vm05 bash[17520]: audit 2026-03-10T07:21:16.683022+0000 mon.vm05 (mon.0) 777 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:17.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:17 vm05 bash[17520]: audit 2026-03-10T07:21:16.687684+0000 mon.vm05 (mon.0) 778 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:17.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:17 vm05 bash[17520]: audit 2026-03-10T07:21:16.690753+0000 mon.vm05 (mon.0) 779 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:17.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:17 vm05 bash[17520]: audit 2026-03-10T07:21:16.693587+0000 mon.vm05 (mon.0) 780 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:17.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:17 vm05 bash[17520]: cluster 2026-03-10T07:21:16.699907+0000 mgr.vm05.wnsmpp (mgr.14195) 200 : cluster [DBG] pgmap v127: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:17.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:17 vm05 bash[17520]: audit 2026-03-10T07:21:16.708198+0000 mon.vm05 (mon.0) 781 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:17 vm09 bash[21099]: audit 2026-03-10T07:21:16.683022+0000 mon.vm05 (mon.0) 777 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:17 vm09 bash[21099]: audit 2026-03-10T07:21:16.687684+0000 mon.vm05 (mon.0) 778 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:17 vm09 bash[21099]: audit 2026-03-10T07:21:16.690753+0000 mon.vm05 (mon.0) 779 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:17 vm09 bash[21099]: audit 2026-03-10T07:21:16.693587+0000 mon.vm05 (mon.0) 780 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:17 vm09 bash[21099]: cluster 2026-03-10T07:21:16.699907+0000 mgr.vm05.wnsmpp (mgr.14195) 200 : cluster [DBG] pgmap v127: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:18.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:17 vm09 bash[21099]: audit 2026-03-10T07:21:16.708198+0000 mon.vm05 (mon.0) 781 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:20.049 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:19 vm09 bash[21099]: cluster 2026-03-10T07:21:18.700289+0000 mgr.vm05.wnsmpp (mgr.14195) 201 : cluster [DBG] pgmap v128: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:20.049 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:19 vm09 bash[21099]: audit 2026-03-10T07:21:19.558367+0000 mon.vm05 (mon.0) 782 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:20.059 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:21:20.076 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:19 vm05 bash[17520]: cluster 2026-03-10T07:21:18.700289+0000 mgr.vm05.wnsmpp (mgr.14195) 201 : cluster [DBG] pgmap v128: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:20.076 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:19 vm05 bash[17520]: audit 2026-03-10T07:21:19.558367+0000 mon.vm05 (mon.0) 782 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:20.322 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:21:20.322 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:20:44.566682Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:20:44.314122Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:20:44.314215Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:20:44.566838Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:21:16.693737Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 0, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:20:44.314280Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:20:44.314244Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:20:44.314185Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:20:46.730797Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:20:44.314350Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:20:44.314028Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:20:44.566734Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T07:21:20.375 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:21:21.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:20 vm05 bash[17520]: audit 2026-03-10T07:21:20.321379+0000 mgr.vm05.wnsmpp (mgr.14195) 202 : audit [DBG] from='client.14518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:21.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:20 vm09 bash[21099]: audit 2026-03-10T07:21:20.321379+0000 mgr.vm05.wnsmpp (mgr.14195) 202 : audit [DBG] from='client.14518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:21.375 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:21:21.997 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:21 vm05 bash[17520]: cluster 2026-03-10T07:21:20.700710+0000 mgr.vm05.wnsmpp (mgr.14195) 203 : cluster [DBG] pgmap v129: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:22.173 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:21 vm09 bash[21099]: cluster 2026-03-10T07:21:20.700710+0000 mgr.vm05.wnsmpp (mgr.14195) 203 : cluster [DBG] pgmap v129: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:22.925 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:21.846903+0000 mon.vm05 (mon.0) 783 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:21.858445+0000 mon.vm05 (mon.0) 784 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:21.945592+0000 mon.vm05 (mon.0) 785 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:21.951081+0000 mon.vm05 (mon.0) 786 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.262297+0000 mon.vm05 (mon.0) 787 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.262971+0000 mon.vm05 (mon.0) 788 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.267930+0000 mon.vm05 (mon.0) 789 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.270145+0000 mon.vm05 (mon.0) 790 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
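The records that follow show the recovery path for the failed placements: cephadm logs "Fencing old nfs.foo.0.0.vm05.adjxhw" and then removes that daemon's cephx key (the "auth rm" dispatch/finished pair), so any leftover ganesha process is locked out of the cluster before a replacement is scheduled. The sketch below mirrors only the auth-revocation step visible in this audit trail; it is illustrative, not the cephadm source, and the real flow does more (e.g. the earlier "osd blocklist ls" suggests blocklist handling as well).

    # Sketch of the fencing step in the records below: revoke the old NFS
    # daemon's cephx key before redeploying. Illustrative only.
    import subprocess

    def fence_nfs_daemon(daemon_id: str) -> None:
        entity = f"client.{daemon_id}"      # e.g. client.nfs.foo.0.0.vm05.adjxhw
        print(f"Fencing old {daemon_id}")   # matches the cephadm [INF] line
        subprocess.check_call(["ceph", "auth", "rm", entity])

    # fence_nfs_daemon("nfs.foo.0.0.vm05.adjxhw")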
bash[17520]: cluster 2026-03-10T07:21:22.270403+0000 mgr.vm05.wnsmpp (mgr.14195) 204 : cluster [DBG] pgmap v130: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cluster 2026-03-10T07:21:22.270403+0000 mgr.vm05.wnsmpp (mgr.14195) 204 : cluster [DBG] pgmap v130: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.277339+0000 mon.vm05 (mon.0) 791 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.277339+0000 mon.vm05 (mon.0) 791 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.277687+0000 mgr.vm05.wnsmpp (mgr.14195) 205 : cephadm [INF] Fencing old nfs.foo.0.0.vm05.adjxhw 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.277687+0000 mgr.vm05.wnsmpp (mgr.14195) 205 : cephadm [INF] Fencing old nfs.foo.0.0.vm05.adjxhw 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.277904+0000 mon.vm05 (mon.0) 792 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm05.adjxhw"}]: dispatch 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.277904+0000 mon.vm05 (mon.0) 792 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm05.adjxhw"}]: dispatch 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.279835+0000 mon.vm05 (mon.0) 793 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm05.adjxhw"}]': finished 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.279835+0000 mon.vm05 (mon.0) 793 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm05.adjxhw"}]': finished 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.284701+0000 mon.vm05 (mon.0) 794 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.284701+0000 mon.vm05 (mon.0) 794 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.285111+0000 mgr.vm05.wnsmpp (mgr.14195) 206 : cephadm [INF] Fencing old nfs.foo.1.0.vm09.pgwkva 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.285111+0000 
mgr.vm05.wnsmpp (mgr.14195) 206 : cephadm [INF] Fencing old nfs.foo.1.0.vm09.pgwkva 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.285336+0000 mon.vm05 (mon.0) 795 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.vm09.pgwkva"}]: dispatch 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.285336+0000 mon.vm05 (mon.0) 795 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.vm09.pgwkva"}]: dispatch 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.287382+0000 mon.vm05 (mon.0) 796 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.vm09.pgwkva"}]': finished 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.287382+0000 mon.vm05 (mon.0) 796 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.vm09.pgwkva"}]': finished 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.292277+0000 mon.vm05 (mon.0) 797 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.292277+0000 mon.vm05 (mon.0) 797 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.292857+0000 mgr.vm05.wnsmpp (mgr.14195) 207 : cephadm [INF] Creating key for client.nfs.foo.0.1.vm05.etqrmm 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.292857+0000 mgr.vm05.wnsmpp (mgr.14195) 207 : cephadm [INF] Creating key for client.nfs.foo.0.1.vm05.etqrmm 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.293036+0000 mon.vm05 (mon.0) 798 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.293036+0000 mon.vm05 (mon.0) 798 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.294877+0000 mon.vm05 (mon.0) 799 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs 
namespace=foo"]}]': finished 2026-03-10T07:21:22.926 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.294877+0000 mon.vm05 (mon.0) 799 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.296718+0000 mgr.vm05.wnsmpp (mgr.14195) 208 : cephadm [INF] Ensuring nfs.foo.0 is in the ganesha grace table 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.296718+0000 mgr.vm05.wnsmpp (mgr.14195) 208 : cephadm [INF] Ensuring nfs.foo.0 is in the ganesha grace table 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.296868+0000 mon.vm05 (mon.0) 800 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.296868+0000 mon.vm05 (mon.0) 800 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.298467+0000 mon.vm05 (mon.0) 801 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.298467+0000 mon.vm05 (mon.0) 801 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.300467+0000 mon.vm05 (mon.0) 802 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.300467+0000 mon.vm05 (mon.0) 802 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.333552+0000 mon.vm05 (mon.0) 803 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.333552+0000 
mon.vm05 (mon.0) 803 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.335759+0000 mon.vm05 (mon.0) 804 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.335759+0000 mon.vm05 (mon.0) 804 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.375262+0000 mgr.vm05.wnsmpp (mgr.14195) 209 : cephadm [INF] Creating rados config object: conf-nfs.foo 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.375262+0000 mgr.vm05.wnsmpp (mgr.14195) 209 : cephadm [INF] Creating rados config object: conf-nfs.foo 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.415537+0000 mgr.vm05.wnsmpp (mgr.14195) 210 : cephadm [INF] Creating key for client.nfs.foo.0.1.vm05.etqrmm-rgw 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.415537+0000 mgr.vm05.wnsmpp (mgr.14195) 210 : cephadm [INF] Creating key for client.nfs.foo.0.1.vm05.etqrmm-rgw 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.415820+0000 mon.vm05 (mon.0) 805 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.415820+0000 mon.vm05 (mon.0) 805 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.418131+0000 mon.vm05 (mon.0) 806 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.418131+0000 mon.vm05 (mon.0) 806 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.421690+0000 mgr.vm05.wnsmpp (mgr.14195) 211 : 
2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: audit 2026-03-10T07:21:22.423982+0000 mon.vm05 (mon.0) 807 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:22.927 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 bash[17520]: cephadm 2026-03-10T07:21:22.425015+0000 mgr.vm05.wnsmpp (mgr.14195) 212 : cephadm [INF] Deploying daemon nfs.foo.0.1.vm05.etqrmm on vm05
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:21.846903+0000 mon.vm05 (mon.0) 783 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:21.858445+0000 mon.vm05 (mon.0) 784 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:21.945592+0000 mon.vm05 (mon.0) 785 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:21.951081+0000 mon.vm05 (mon.0) 786 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.262297+0000 mon.vm05 (mon.0) 787 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.262971+0000 mon.vm05 (mon.0) 788 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.267930+0000 mon.vm05 (mon.0) 789 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.270145+0000 mon.vm05 (mon.0) 790 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cluster 2026-03-10T07:21:22.270403+0000 mgr.vm05.wnsmpp (mgr.14195) 204 : cluster [DBG] pgmap v130: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.277339+0000 mon.vm05 (mon.0) 791 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.277687+0000 mgr.vm05.wnsmpp (mgr.14195) 205 : cephadm [INF] Fencing old nfs.foo.0.0.vm05.adjxhw
2026-03-10T07:21:23.174 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.277904+0000 mon.vm05 (mon.0) 792 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm05.adjxhw"}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.279835+0000 mon.vm05 (mon.0) 793 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm05.adjxhw"}]': finished
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.284701+0000 mon.vm05 (mon.0) 794 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.285111+0000 mgr.vm05.wnsmpp (mgr.14195) 206 : cephadm [INF] Fencing old nfs.foo.1.0.vm09.pgwkva
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.285336+0000 mon.vm05 (mon.0) 795 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.vm09.pgwkva"}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.287382+0000 mon.vm05 (mon.0) 796 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.vm09.pgwkva"}]': finished
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.292277+0000 mon.vm05 (mon.0) 797 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.292857+0000 mgr.vm05.wnsmpp (mgr.14195) 207 : cephadm [INF] Creating key for client.nfs.foo.0.1.vm05.etqrmm
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.293036+0000 mon.vm05 (mon.0) 798 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.294877+0000 mon.vm05 (mon.0) 799 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.296718+0000 mgr.vm05.wnsmpp (mgr.14195) 208 : cephadm [INF] Ensuring nfs.foo.0 is in the ganesha grace table
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.296868+0000 mon.vm05 (mon.0) 800 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.298467+0000 mon.vm05 (mon.0) 801 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.300467+0000 mon.vm05 (mon.0) 802 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.333552+0000 mon.vm05 (mon.0) 803 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.335759+0000 mon.vm05 (mon.0) 804 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.375262+0000 mgr.vm05.wnsmpp (mgr.14195) 209 : cephadm [INF] Creating rados config object: conf-nfs.foo
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.415537+0000 mgr.vm05.wnsmpp (mgr.14195) 210 : cephadm [INF] Creating key for client.nfs.foo.0.1.vm05.etqrmm-rgw
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.415820+0000 mon.vm05 (mon.0) 805 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.418131+0000 mon.vm05 (mon.0) 806 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm05.etqrmm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.421690+0000 mgr.vm05.wnsmpp (mgr.14195) 211 : cephadm [WRN] Bind address in nfs.foo.0.1.vm05.etqrmm's ganesha conf is defaulting to empty
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: audit 2026-03-10T07:21:22.423982+0000 mon.vm05 (mon.0) 807 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:23.175 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:22 vm09 bash[21099]: cephadm 2026-03-10T07:21:22.425015+0000 mgr.vm05.wnsmpp (mgr.14195) 212 : cephadm [INF] Deploying daemon nfs.foo.0.1.vm05.etqrmm on vm05
2026-03-10T07:21:23.195 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:22 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:23.195 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:23 vm05 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:24.105 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:23 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 systemd[1]: /etc/systemd/system/ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
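[editor's sketch] The four systemd warnings above point at line 23 of the cephadm-generated unit template for this cluster's fsid, which is where KillMode=none comes from; silencing them permanently would require that generated template to change. Purely as an illustrative sketch (not output from this run, and whether cephadm's container handling tolerates a different mode is not verified here), a drop-in override is the standard systemd way to replace such a directive without editing the generated file:
    # Hypothetical drop-in, e.g. opened with:
    #   sudo systemctl edit ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@.service
    [Service]
    # Replace the deprecated KillMode=none with a mode systemd still supports,
    # as the warning text itself suggests ('mixed' or 'control-group').
    KillMode=mixed
On a throwaway test VM like this one the warning is harmless and is normally left alone.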
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.226072+0000 mon.vm05 (mon.0) 808 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.237221+0000 mon.vm05 (mon.0) 809 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.242591+0000 mon.vm05 (mon.0) 810 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cephadm 2026-03-10T07:21:23.243099+0000 mgr.vm05.wnsmpp (mgr.14195) 213 : cephadm [INF] Creating key for client.nfs.foo.1.1.vm09.diytrs
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.245350+0000 mon.vm05 (mon.0) 811 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.251475+0000 mon.vm05 (mon.0) 812 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T07:21:24.359 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cephadm 2026-03-10T07:21:23.254017+0000 mgr.vm05.wnsmpp (mgr.14195) 214 : cephadm [INF] Ensuring nfs.foo.1 is in the ganesha grace table
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.256655+0000 mon.vm05 (mon.0) 813 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.258422+0000 mon.vm05 (mon.0) 814 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.261527+0000 mon.vm05 (mon.0) 815 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cluster 2026-03-10T07:21:23.266117+0000 mon.vm05 (mon.0) 816 : cluster [INF] Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 2 daemon(s))
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cluster 2026-03-10T07:21:23.266127+0000 mon.vm05 (mon.0) 817 : cluster [INF] Cluster is now healthy
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.299860+0000 mon.vm05 (mon.0) 818 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.302734+0000 mon.vm05 (mon.0) 819 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cephadm 2026-03-10T07:21:23.355471+0000 mgr.vm05.wnsmpp (mgr.14195) 215 : cephadm [INF] Rados config object exists: conf-nfs.foo
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cephadm 2026-03-10T07:21:23.355533+0000 mgr.vm05.wnsmpp (mgr.14195) 216 : cephadm [INF] Creating key for client.nfs.foo.1.1.vm09.diytrs-rgw
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.355812+0000 mon.vm05 (mon.0) 820 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.357991+0000 mon.vm05 (mon.0) 821 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cephadm 2026-03-10T07:21:23.360753+0000 mgr.vm05.wnsmpp (mgr.14195) 217 : cephadm [WRN] Bind address in nfs.foo.1.1.vm09.diytrs's ganesha conf is defaulting to empty
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:23.361143+0000 mon.vm05 (mon.0) 822 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: cephadm 2026-03-10T07:21:23.362066+0000 mgr.vm05.wnsmpp (mgr.14195) 218 : cephadm [INF] Deploying daemon nfs.foo.1.1.vm09.diytrs on vm09
2026-03-10T07:21:24.360 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:24 vm09 bash[21099]: audit 2026-03-10T07:21:24.221417+0000 mon.vm05 (mon.0) 823 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.226072+0000 mon.vm05 (mon.0) 808 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.237221+0000 mon.vm05 (mon.0) 809 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.242591+0000 mon.vm05 (mon.0) 810 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cephadm 2026-03-10T07:21:23.243099+0000 mgr.vm05.wnsmpp (mgr.14195) 213 : cephadm [INF] Creating key for client.nfs.foo.1.1.vm09.diytrs
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.245350+0000 mon.vm05 (mon.0) 811 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.251475+0000 mon.vm05 (mon.0) 812 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cephadm 2026-03-10T07:21:23.254017+0000 mgr.vm05.wnsmpp (mgr.14195) 214 : cephadm [INF] Ensuring nfs.foo.1 is in the ganesha grace table
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.256655+0000 mon.vm05 (mon.0) 813 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.258422+0000 mon.vm05 (mon.0) 814 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.261527+0000 mon.vm05 (mon.0) 815 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cluster 2026-03-10T07:21:23.266117+0000 mon.vm05 (mon.0) 816 : cluster [INF] Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 2 daemon(s))
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cluster 2026-03-10T07:21:23.266127+0000 mon.vm05 (mon.0) 817 : cluster [INF] Cluster is now healthy
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.299860+0000 mon.vm05 (mon.0) 818 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.302734+0000 mon.vm05 (mon.0) 819 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cephadm 2026-03-10T07:21:23.355471+0000 mgr.vm05.wnsmpp (mgr.14195) 215 : cephadm [INF] Rados config object exists: conf-nfs.foo
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cephadm 2026-03-10T07:21:23.355533+0000 mgr.vm05.wnsmpp (mgr.14195) 216 : cephadm [INF] Creating key for client.nfs.foo.1.1.vm09.diytrs-rgw
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.355812+0000 mon.vm05 (mon.0) 820 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.357991+0000 mon.vm05 (mon.0) 821 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.1.vm09.diytrs-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cephadm 2026-03-10T07:21:23.360753+0000 mgr.vm05.wnsmpp (mgr.14195) 217 : cephadm [WRN] Bind address in nfs.foo.1.1.vm09.diytrs's ganesha conf is defaulting to empty
2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.361143+0000 mon.vm05 (mon.0) 822 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]:
dispatch 2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:23.361143+0000 mon.vm05 (mon.0) 822 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cephadm 2026-03-10T07:21:23.362066+0000 mgr.vm05.wnsmpp (mgr.14195) 218 : cephadm [INF] Deploying daemon nfs.foo.1.1.vm09.diytrs on vm09 2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: cephadm 2026-03-10T07:21:23.362066+0000 mgr.vm05.wnsmpp (mgr.14195) 218 : cephadm [INF] Deploying daemon nfs.foo.1.1.vm09.diytrs on vm09 2026-03-10T07:21:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:24.221417+0000 mon.vm05 (mon.0) 823 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:24.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:24 vm05 bash[17520]: audit 2026-03-10T07:21:24.221417+0000 mon.vm05 (mon.0) 823 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:25.106 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:21:25.373 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:21:25.373 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:21:21.840808Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:21:21.840726Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:21:21.840639Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:21:21.840964Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:21:16.693737Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "last_refresh": "2026-03-10T07:21:21.840782Z", "ports": [2049, 9002], "running": 4, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": 
"2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:21:21.840834Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:21:21.840755Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:21:21.840938Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:21:24.250775Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "ports": [12049], "running": 0, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:21:21.840698Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:21:21.840590Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:21:21.840860Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: audit 2026-03-10T07:21:24.235707+0000 mon.vm05 (mon.0) 824 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: audit 2026-03-10T07:21:24.235707+0000 mon.vm05 (mon.0) 824 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: audit 2026-03-10T07:21:24.245109+0000 mon.vm05 (mon.0) 825 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: audit 2026-03-10T07:21:24.245109+0000 mon.vm05 (mon.0) 825 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 
2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: audit 2026-03-10T07:21:24.250518+0000 mon.vm05 (mon.0) 826 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: audit 2026-03-10T07:21:24.261358+0000 mon.vm05 (mon.0) 827 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: cluster 2026-03-10T07:21:24.270759+0000 mgr.vm05.wnsmpp (mgr.14195) 219 : cluster [DBG] pgmap v131: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T07:21:25.384 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:25 vm05 bash[17520]: audit 2026-03-10T07:21:24.564145+0000 mon.vm05 (mon.0) 828 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:25.432 INFO:tasks.cephadm:nfs.foo has 0/2
2026-03-10T07:21:26.387 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:26 vm05 bash[17520]: audit 2026-03-10T07:21:25.371796+0000 mgr.vm05.wnsmpp (mgr.14195) 220 : audit [DBG] from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:26.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:21:27.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:27 vm05 bash[17520]: cluster 2026-03-10T07:21:26.271267+0000 mgr.vm05.wnsmpp (mgr.14195) 221 : cluster [DBG] pgmap v132: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 884 B/s wr, 2 op/s
2026-03-10T07:21:28.960 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:28 vm05 bash[17520]: audit 2026-03-10T07:21:27.679573+0000 mon.vm05 (mon.0) 829 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:28.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:28 vm05 bash[17520]: audit 2026-03-10T07:21:27.680355+0000 mon.vm05 (mon.0) 830 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:21:28.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:28 vm05 bash[17520]: cluster 2026-03-10T07:21:28.271631+0000 mgr.vm05.wnsmpp (mgr.14195) 222 : cluster [DBG] pgmap v133: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 973 B/s wr, 2 op/s
2026-03-10T07:21:30.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:28.896655+0000 mon.vm05 (mon.0) 831 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:30.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:28.902266+0000 mon.vm05 (mon.0) 832 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:30.154 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:29.548369+0000 mon.vm05 (mon.0) 833 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:30.154 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:29.554863+0000 mon.vm05 (mon.0) 834 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:30.154 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:29.556036+0000 mon.vm05 (mon.0) 835 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:30.154 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:29.556935+0000 mon.vm05 (mon.0) 836 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:30.154 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:29.561513+0000 mon.vm05 (mon.0) 837 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:30.154 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:29.563088+0000 mon.vm05 (mon.0) 838 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:21:30.154 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:29 vm05 bash[17520]: audit 2026-03-10T07:21:29.569561+0000 mon.vm05 (mon.0) 839 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:30.158 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
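The DEBUG lines above show the same remote `cephadm shell ... ceph orch ls -f json` invocation being reissued roughly once a second; the task keeps polling until the target service reports all of its daemons running, or its timeout expires. A minimal sketch of that retry pattern (here `run_orch_ls` is a hypothetical stand-in for the remote invocation, not teuthology's own API):

    import json
    import time

    def wait_for_service(run_orch_ls, service_name, timeout=300.0):
        # Poll until the service reports running == size, or give up at `timeout`.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            for svc in json.loads(run_orch_ls()):
                if svc.get("service_name") == service_name:
                    status = svc.get("status", {})
                    if status.get("size", 0) > 0 and status.get("running") == status.get("size"):
                        return
            time.sleep(1)
        raise TimeoutError("%s did not come up within %ss" % (service_name, timeout))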
2026-03-10T07:21:30.551 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T07:21:30.551 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:21:28.887719Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:21:28.887605Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:21:28.887516Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:21:28.887877Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:21:16.693737Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "last_refresh": "2026-03-10T07:21:28.887692Z", "ports": [2049, 9002], "running": 4, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:21:28.887747Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:21:28.887633Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:21:28.887852Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:21:29.569858Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "last_refresh": "2026-03-10T07:21:28.887663Z", "ports": [12049], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:21:28.887578Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:21:28.887465Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "ports": [9095], "running": 0, "size": 1}}]
2026-03-10T07:21:30.660 INFO:tasks.cephadm:nfs.foo has 2/2
2026-03-10T07:21:30.660 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service...
2026-03-10T07:21:30.663 INFO:tasks.cephadm:Waiting for ceph service ingress.nfs.foo to start (timeout 300)...
2026-03-10T07:21:30.663 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cephadm 2026-03-10T07:21:29.738041+0000 mgr.vm05.wnsmpp (mgr.14195) 224 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cephadm 2026-03-10T07:21:29.738041+0000 mgr.vm05.wnsmpp (mgr.14195) 224 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cluster 2026-03-10T07:21:30.272091+0000 mgr.vm05.wnsmpp (mgr.14195) 225 : cluster [DBG] pgmap v134: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cluster 2026-03-10T07:21:30.272091+0000 mgr.vm05.wnsmpp (mgr.14195) 225 : cluster [DBG] pgmap v134: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:30.407457+0000 mon.vm05 (mon.0) 840 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:30.407457+0000 mon.vm05 (mon.0) 840 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:30.414519+0000 mon.vm05 (mon.0) 841 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:30.414519+0000 mon.vm05 (mon.0) 841 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cephadm 2026-03-10T07:21:30.419528+0000 mgr.vm05.wnsmpp (mgr.14195) 226 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm05.yhprte (dependencies changed)... 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cephadm 2026-03-10T07:21:30.419528+0000 mgr.vm05.wnsmpp (mgr.14195) 226 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm05.yhprte (dependencies changed)... 
2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cephadm 2026-03-10T07:21:30.420514+0000 mgr.vm05.wnsmpp (mgr.14195) 227 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm05.yhprte on vm05 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: cephadm 2026-03-10T07:21:30.420514+0000 mgr.vm05.wnsmpp (mgr.14195) 227 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm05.yhprte on vm05 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:31.054848+0000 mon.vm05 (mon.0) 842 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:31.054848+0000 mon.vm05 (mon.0) 842 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:31.059696+0000 mon.vm05 (mon.0) 843 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:31 vm09 bash[21099]: audit 2026-03-10T07:21:31.059696+0000 mon.vm05 (mon.0) 843 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:29.579934+0000 mgr.vm05.wnsmpp (mgr.14195) 223 : cephadm [INF] Reconfiguring prometheus.vm05 (dependencies changed)... 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:29.579934+0000 mgr.vm05.wnsmpp (mgr.14195) 223 : cephadm [INF] Reconfiguring prometheus.vm05 (dependencies changed)... 
2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:29.738041+0000 mgr.vm05.wnsmpp (mgr.14195) 224 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:29.738041+0000 mgr.vm05.wnsmpp (mgr.14195) 224 : cephadm [INF] Reconfiguring daemon prometheus.vm05 on vm05 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cluster 2026-03-10T07:21:30.272091+0000 mgr.vm05.wnsmpp (mgr.14195) 225 : cluster [DBG] pgmap v134: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cluster 2026-03-10T07:21:30.272091+0000 mgr.vm05.wnsmpp (mgr.14195) 225 : cluster [DBG] pgmap v134: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:30.407457+0000 mon.vm05 (mon.0) 840 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:30.407457+0000 mon.vm05 (mon.0) 840 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:30.414519+0000 mon.vm05 (mon.0) 841 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:30.414519+0000 mon.vm05 (mon.0) 841 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:30.419528+0000 mgr.vm05.wnsmpp (mgr.14195) 226 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm05.yhprte (dependencies changed)... 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:30.419528+0000 mgr.vm05.wnsmpp (mgr.14195) 226 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm05.yhprte (dependencies changed)... 
2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:30.420514+0000 mgr.vm05.wnsmpp (mgr.14195) 227 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm05.yhprte on vm05 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: cephadm 2026-03-10T07:21:30.420514+0000 mgr.vm05.wnsmpp (mgr.14195) 227 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm05.yhprte on vm05 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:31.054848+0000 mon.vm05 (mon.0) 842 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:31.054848+0000 mon.vm05 (mon.0) 842 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:31.059696+0000 mon.vm05 (mon.0) 843 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:31 vm05 bash[17520]: audit 2026-03-10T07:21:31.059696+0000 mon.vm05 (mon.0) 843 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:30.543575+0000 mgr.vm05.wnsmpp (mgr.14195) 228 : audit [DBG] from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:30.543575+0000 mgr.vm05.wnsmpp (mgr.14195) 228 : audit [DBG] from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: cephadm 2026-03-10T07:21:31.060822+0000 mgr.vm05.wnsmpp (mgr.14195) 229 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm09.etnbzh (dependencies changed)... 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: cephadm 2026-03-10T07:21:31.060822+0000 mgr.vm05.wnsmpp (mgr.14195) 229 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm09.etnbzh (dependencies changed)... 
2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: cephadm 2026-03-10T07:21:31.061727+0000 mgr.vm05.wnsmpp (mgr.14195) 230 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm09.etnbzh on vm09 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: cephadm 2026-03-10T07:21:31.061727+0000 mgr.vm05.wnsmpp (mgr.14195) 230 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm09.etnbzh on vm09 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.561571+0000 mon.vm05 (mon.0) 844 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.561571+0000 mon.vm05 (mon.0) 844 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.565694+0000 mon.vm05 (mon.0) 845 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.565694+0000 mon.vm05 (mon.0) 845 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.568307+0000 mon.vm05 (mon.0) 846 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.568307+0000 mon.vm05 (mon.0) 846 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.596967+0000 mon.vm05 (mon.0) 847 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:21:32.673 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:32 vm09 bash[21099]: audit 2026-03-10T07:21:31.596967+0000 mon.vm05 (mon.0) 847 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:21:32.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:30.543575+0000 mgr.vm05.wnsmpp (mgr.14195) 228 : audit [DBG] from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:30.543575+0000 mgr.vm05.wnsmpp (mgr.14195) 228 : audit [DBG] from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: cephadm 2026-03-10T07:21:31.060822+0000 mgr.vm05.wnsmpp (mgr.14195) 229 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm09.etnbzh (dependencies 
changed)... 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: cephadm 2026-03-10T07:21:31.060822+0000 mgr.vm05.wnsmpp (mgr.14195) 229 : cephadm [INF] Reconfiguring haproxy.nfs.foo.vm09.etnbzh (dependencies changed)... 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: cephadm 2026-03-10T07:21:31.061727+0000 mgr.vm05.wnsmpp (mgr.14195) 230 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm09.etnbzh on vm09 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: cephadm 2026-03-10T07:21:31.061727+0000 mgr.vm05.wnsmpp (mgr.14195) 230 : cephadm [INF] Reconfiguring daemon haproxy.nfs.foo.vm09.etnbzh on vm09 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.561571+0000 mon.vm05 (mon.0) 844 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.561571+0000 mon.vm05 (mon.0) 844 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.565694+0000 mon.vm05 (mon.0) 845 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.565694+0000 mon.vm05 (mon.0) 845 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.568307+0000 mon.vm05 (mon.0) 846 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.568307+0000 mon.vm05 (mon.0) 846 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.596967+0000 mon.vm05 (mon.0) 847 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:21:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:32 vm05 bash[17520]: audit 2026-03-10T07:21:31.596967+0000 mon.vm05 (mon.0) 847 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:21:33.672 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:33 vm09 bash[21099]: audit 2026-03-10T07:21:31.568553+0000 mgr.vm05.wnsmpp (mgr.14195) 231 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:33.672 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:33 vm09 bash[21099]: audit 2026-03-10T07:21:31.568553+0000 mgr.vm05.wnsmpp (mgr.14195) 231 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:33.672 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:33 vm09 bash[21099]: cluster 2026-03-10T07:21:32.272445+0000 mgr.vm05.wnsmpp (mgr.14195) 232 : cluster [DBG] pgmap v135: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:33.672 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:33 vm09 bash[21099]: cluster 2026-03-10T07:21:32.272445+0000 mgr.vm05.wnsmpp (mgr.14195) 232 : cluster [DBG] pgmap v135: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:33.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:33 vm05 bash[17520]: audit 2026-03-10T07:21:31.568553+0000 mgr.vm05.wnsmpp (mgr.14195) 231 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:33.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:33 vm05 bash[17520]: audit 2026-03-10T07:21:31.568553+0000 mgr.vm05.wnsmpp (mgr.14195) 231 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T07:21:33.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:33 vm05 bash[17520]: cluster 2026-03-10T07:21:32.272445+0000 mgr.vm05.wnsmpp (mgr.14195) 232 : cluster [DBG] pgmap v135: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:33.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:33 vm05 bash[17520]: cluster 2026-03-10T07:21:32.272445+0000 mgr.vm05.wnsmpp (mgr.14195) 232 : cluster [DBG] pgmap v135: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T07:21:35.387 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:21:35.661 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:21:35.661 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:21:28.887719Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:21:28.887605Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:21:28.887516Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:21:28.887877Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:21:16.693737Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": 
"ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "ports": [2049, 9002], "running": 2, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:21:28.887747Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:21:28.887633Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:21:28.887852Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:21:29.569858Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "last_refresh": "2026-03-10T07:21:28.887663Z", "ports": [12049], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:21:28.887578Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:21:28.887465Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "ports": [9095], "running": 0, "size": 1}}] 2026-03-10T07:21:35.672 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:35 vm05 bash[17520]: cluster 2026-03-10T07:21:34.272869+0000 mgr.vm05.wnsmpp (mgr.14195) 233 : cluster [DBG] pgmap v136: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-10T07:21:35.672 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:35 vm05 bash[17520]: cluster 
2026-03-10T07:21:34.272869+0000 mgr.vm05.wnsmpp (mgr.14195) 233 : cluster [DBG] pgmap v136: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T07:21:35.672 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:35 vm09 bash[21099]: cluster 2026-03-10T07:21:34.272869+0000 mgr.vm05.wnsmpp (mgr.14195) 233 : cluster [DBG] pgmap v136: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T07:21:35.733 INFO:tasks.cephadm:ingress.nfs.foo has 2/4
2026-03-10T07:21:36.733 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph orch ls -f json
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:35.659503+0000 mgr.vm05.wnsmpp (mgr.14195) 234 : audit [DBG] from='client.14570 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: cluster 2026-03-10T07:21:36.273338+0000 mgr.vm05.wnsmpp (mgr.14195) 235 : cluster [DBG] pgmap v137: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:36.925055+0000 mon.vm05 (mon.0) 848 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:36.932623+0000 mon.vm05 (mon.0) 849 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:37.322448+0000 mon.vm05 (mon.0) 850 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:37.328355+0000 mon.vm05 (mon.0) 851 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:37.329251+0000 mon.vm05 (mon.0) 852 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:37.329746+0000 mon.vm05 (mon.0) 853 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:37.333259+0000 mon.vm05 (mon.0) 854 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:37.334864+0000 mon.vm05 (mon.0) 855 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:21:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:37 vm05 bash[17520]: audit 2026-03-10T07:21:37.340058+0000 mon.vm05 (mon.0) 856 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:35.659503+0000 mgr.vm05.wnsmpp (mgr.14195) 234 : audit [DBG] from='client.14570 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: cluster 2026-03-10T07:21:36.273338+0000 mgr.vm05.wnsmpp (mgr.14195) 235 : cluster [DBG] pgmap v137: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.6 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:36.925055+0000 mon.vm05 (mon.0) 848 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:36.932623+0000 mon.vm05 (mon.0) 849 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:37.322448+0000 mon.vm05 (mon.0) 850 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:37.328355+0000 mon.vm05 (mon.0) 851 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:37.329251+0000 mon.vm05 (mon.0) 852 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:37.329746+0000 mon.vm05 (mon.0) 853 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:37.333259+0000 mon.vm05 (mon.0) 854 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:37.334864+0000 mon.vm05 (mon.0) 855 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:21:37.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:37 vm09 bash[21099]: audit 2026-03-10T07:21:37.340058+0000 mon.vm05 (mon.0) 856 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:39.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:39 vm05 bash[17520]: cluster 2026-03-10T07:21:38.273720+0000 mgr.vm05.wnsmpp (mgr.14195) 236 : cluster [DBG] pgmap v138: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
2026-03-10T07:21:39.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:39 vm09 bash[21099]: cluster 2026-03-10T07:21:38.273720+0000 mgr.vm05.wnsmpp (mgr.14195) 236 : cluster [DBG] pgmap v138: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail;
1.4 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:21:39.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:39 vm09 bash[21099]: cluster 2026-03-10T07:21:38.273720+0000 mgr.vm05.wnsmpp (mgr.14195) 236 : cluster [DBG] pgmap v138: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-10T07:21:40.467 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:21:40.734 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T07:21:40.734 INFO:teuthology.orchestra.run.vm05.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T07:16:57.250451Z", "last_refresh": "2026-03-10T07:21:37.316253Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:17:50.632506Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T07:16:55.873736Z", "last_refresh": "2026-03-10T07:21:36.918763Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:51.461998Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T07:16:55.599449Z", "last_refresh": "2026-03-10T07:21:36.918853Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T07:16:56.598903Z", "last_refresh": "2026-03-10T07:21:37.316418Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T07:21:16.693737Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9002, "virtual_ip": "12.12.1.105/22"}, "status": {"created": "2026-03-10T07:20:38.184040Z", "last_refresh": "2026-03-10T07:21:36.918700Z", "ports": [2049, 9002], "running": 4, "size": 4, "virtual_ip": "12.12.1.105/22"}}, {"events": ["2026-03-10T07:20:39.165253Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T07:20:33.161016Z", "last_refresh": "2026-03-10T07:21:36.918941Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.987383Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T07:16:55.321760Z", "last_refresh": "2026-03-10T07:21:36.918912Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:54.139000Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm05:192.168.123.105=vm05", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T07:17:37.480725Z", "last_refresh": "2026-03-10T07:21:36.918821Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T07:21:37.340258Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T07:20:44.641942Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm05.adjxhw on vm05: 
grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\"", "2026-03-10T07:20:44.691372Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.1.0.vm09.pgwkva on vm09: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 2}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 12049}, "status": {"created": "2026-03-10T07:20:38.179287Z", "last_refresh": "2026-03-10T07:21:36.918883Z", "ports": [12049], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:17:52.164679Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T07:16:56.934008Z", "last_refresh": "2026-03-10T07:21:36.919203Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T07:18:12.645291Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T07:18:12.640395Z", "last_refresh": "2026-03-10T07:21:36.918645Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T07:17:54.141548Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T07:16:56.184682Z", "last_refresh": "2026-03-10T07:21:37.316308Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T07:21:40.808 INFO:tasks.cephadm:ingress.nfs.foo has 4/4 2026-03-10T07:21:40.808 INFO:teuthology.run_tasks:Running task cephadm.shell... 
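The `tasks.cephadm:ingress.nfs.foo has 2/4` and `has 4/4` lines above show what `cephadm.wait_for_service` does: poll `ceph orch ls -f json` until the service's `status.running` count reaches `status.size`. A minimal standalone sketch of an equivalent wait loop, run from inside `cephadm shell`; the `jq` dependency is an assumption for illustration, not something the test itself uses:

    # Poll until a service reports all placed daemons running (sketch; assumes jq is installed).
    SVC=ingress.nfs.foo
    until ceph orch ls --format json \
        | jq -e --arg svc "$SVC" \
            '.[] | select(.service_name == $svc) | .status | .running == .size and .size > 0' \
        >/dev/null; do
      echo "waiting for $SVC..."
      sleep 5
    done

Note the first `orch ls` snapshot above still showed `"running": 2, "size": 4` for ingress.nfs.foo (and transient `grace tool failed: rados_pool_create: -1` placement errors for nfs.foo that later resolved), which is exactly why the task keeps polling instead of checking once.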
2026-03-10T07:21:40.811 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local
2026-03-10T07:21:40.811 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph nfs export create cephfs --fsname foofs --cluster-id foo --pseudo-path /fake'
2026-03-10T07:21:41.710 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:41 vm05 bash[17520]: cluster 2026-03-10T07:21:40.274181+0000 mgr.vm05.wnsmpp (mgr.14195) 237 : cluster [DBG] pgmap v139: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
2026-03-10T07:21:41.922 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:41 vm09 bash[21099]: cluster 2026-03-10T07:21:40.274181+0000 mgr.vm05.wnsmpp (mgr.14195) 237 : cluster [DBG] pgmap v139: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
2026-03-10T07:21:42.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:42 vm05 bash[17520]: audit 2026-03-10T07:21:40.732522+0000 mgr.vm05.wnsmpp (mgr.14195) 238 : audit [DBG] from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:42.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:42 vm09 bash[21099]: audit 2026-03-10T07:21:40.732522+0000 mgr.vm05.wnsmpp (mgr.14195) 238 : audit [DBG] from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T07:21:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:43 vm05 bash[17520]: cluster 2026-03-10T07:21:42.274573+0000 mgr.vm05.wnsmpp (mgr.14195) 239 : cluster [DBG] pgmap v140: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:21:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:43 vm05 bash[17520]: audit 2026-03-10T07:21:42.676016+0000 mon.vm05 (mon.0) 857 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:21:43.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:43 vm09 bash[21099]: cluster 2026-03-10T07:21:42.274573+0000 mgr.vm05.wnsmpp (mgr.14195) 239 : cluster [DBG] pgmap v140: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:21:43.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:43 vm09 bash[21099]: audit 2026-03-10T07:21:42.676016+0000 mon.vm05 (mon.0) 857 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:21:44.505 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:21:44.850 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T07:21:44.850 INFO:teuthology.orchestra.run.vm05.stdout: "bind": "/fake",
2026-03-10T07:21:44.850 INFO:teuthology.orchestra.run.vm05.stdout: "cluster": "foo",
2026-03-10T07:21:44.850 INFO:teuthology.orchestra.run.vm05.stdout: "fs": "foofs",
2026-03-10T07:21:44.850 INFO:teuthology.orchestra.run.vm05.stdout: "mode": "RW",
2026-03-10T07:21:44.850 INFO:teuthology.orchestra.run.vm05.stdout: "path": "/"
2026-03-10T07:21:44.850 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T07:21:44.918 INFO:teuthology.run_tasks:Running task vip.exec...
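The `nfs export create cephfs` call returns the export summary JSON printed above ("bind": "/fake", "fs": "foofs", mode RW). Before mounting, the export can be inspected through the same nfs mgr module; a short sketch of the usual commands, with the caveat that exact subcommand names vary slightly across Ceph releases, so treat these as squid-era assumptions:

    # Inspect the NFS cluster and export just created (sketch; subcommand names assumed).
    ceph nfs cluster ls            # should list "foo"
    ceph nfs export ls foo         # should list "/fake"
    ceph nfs export info foo /fake # full export spec, including the generated cephfs client user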
2026-03-10T07:21:44.921 INFO:tasks.vip:Running commands on role host.a host ubuntu@vm05.local
2026-03-10T07:21:44.921 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mkdir /mnt/foo'
2026-03-10T07:21:44.928 INFO:teuthology.orchestra.run.vm05.stderr:+ mkdir /mnt/foo
2026-03-10T07:21:44.929 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'sleep 5'
2026-03-10T07:21:44.980 INFO:teuthology.orchestra.run.vm05.stderr:+ sleep 5
2026-03-10T07:21:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:45 vm05 bash[17520]: cluster 2026-03-10T07:21:44.275017+0000 mgr.vm05.wnsmpp (mgr.14195) 240 : cluster [DBG] pgmap v141: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:21:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:45 vm05 bash[17520]: audit 2026-03-10T07:21:44.834777+0000 mon.vm05 (mon.0) 858 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch
2026-03-10T07:21:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:45 vm05 bash[17520]: audit 2026-03-10T07:21:44.837121+0000 mon.vm05 (mon.0) 859 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]': finished
2026-03-10T07:21:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:45 vm05 bash[17520]: audit 2026-03-10T07:21:44.839511+0000 mon.vm05 (mon.0) 860 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch
2026-03-10T07:21:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:45 vm09 bash[21099]: cluster 2026-03-10T07:21:44.275017+0000 mgr.vm05.wnsmpp (mgr.14195) 240 : cluster [DBG] pgmap v141: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:21:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:45 vm09 bash[21099]: audit 2026-03-10T07:21:44.834777+0000 mon.vm05 (mon.0) 858 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch
2026-03-10T07:21:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:45 vm09 bash[21099]: audit 2026-03-10T07:21:44.837121+0000 mon.vm05 (mon.0) 859 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]': finished
2026-03-10T07:21:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:45 vm09 bash[21099]: audit 2026-03-10T07:21:44.839511+0000 mon.vm05 (mon.0) 860 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch
2026-03-10T07:21:46.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:46 vm05 bash[17520]: audit 2026-03-10T07:21:44.782737+0000 mgr.vm05.wnsmpp (mgr.14195) 241 : audit [DBG] from='client.14578 -' entity='client.admin' cmd=[{"prefix": "nfs export create cephfs", "fsname": "foofs", "cluster_id": "foo", "pseudo_path": "/fake", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:46.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:46 vm09 bash[21099]: audit 2026-03-10T07:21:44.782737+0000 mgr.vm05.wnsmpp (mgr.14195) 241 : audit [DBG] from='client.14578 -' entity='client.admin' cmd=[{"prefix": "nfs export create cephfs", "fsname": "foofs", "cluster_id": "foo", "pseudo_path": "/fake", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:47.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:47 vm05 bash[17520]: cluster 2026-03-10T07:21:46.275405+0000 mgr.vm05.wnsmpp (mgr.14195) 242 : cluster [DBG] pgmap v142: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:21:47.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:47 vm09 bash[21099]: cluster 2026-03-10T07:21:46.275405+0000 mgr.vm05.wnsmpp (mgr.14195) 242 : cluster [DBG] pgmap v142: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:21:48.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:48 vm09 bash[21099]: cluster 2026-03-10T07:21:47.484172+0000 mon.vm05 (mon.0) 861 : cluster [DBG] mgrmap e19: vm05.wnsmpp(active, since 4m), standbys: vm09.rfdvwa
2026-03-10T07:21:48.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:48 vm09 bash[21099]: cluster 2026-03-10T07:21:48.275803+0000 mgr.vm05.wnsmpp (mgr.14195) 243 : cluster [DBG] pgmap v143: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
2026-03-10T07:21:48.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:48 vm05 bash[17520]: cluster 2026-03-10T07:21:47.484172+0000 mon.vm05 (mon.0) 861 : cluster [DBG] mgrmap e19: vm05.wnsmpp(active, since 4m), standbys: vm09.rfdvwa
2026-03-10T07:21:48.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:48 vm05 bash[17520]: cluster 2026-03-10T07:21:48.275803+0000 mgr.vm05.wnsmpp (mgr.14195) 243 : cluster [DBG] pgmap v143: 97 pgs: 97 active+clean; 451 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
2026-03-10T07:21:49.982 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mount -t nfs 12.12.1.105:/fake /mnt/foo'
2026-03-10T07:21:50.036 INFO:teuthology.orchestra.run.vm05.stderr:+ mount -t nfs 12.12.1.105:/fake /mnt/foo
2026-03-10T07:21:50.172 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'echo test > /mnt/foo/testfile'
2026-03-10T07:21:50.220 INFO:teuthology.orchestra.run.vm05.stderr:+ echo test
2026-03-10T07:21:50.230 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c sync
2026-03-10T07:21:50.282 INFO:teuthology.orchestra.run.vm05.stderr:+ sync
2026-03-10T07:21:50.292 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T07:21:50.294 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -v /mnt/foo:/mnt/foo -- bash -c 'echo "Check with each haproxy down in turn..."
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> for haproxy in `ceph orch ps | grep ^haproxy.nfs.foo. | awk '"'"'{print $1}'"'"'`; do
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> ceph orch daemon stop $haproxy
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> while ! ceph orch ps | grep $haproxy | grep stopped; do sleep 1 ; done
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> cat /mnt/foo/testfile
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> echo $haproxy > /mnt/foo/testfile
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> sync
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> ceph orch daemon start $haproxy
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> while ! ceph orch ps | grep $haproxy | grep running; do sleep 1 ; done
2026-03-10T07:21:50.294 DEBUG:teuthology.orchestra.run.vm05:> done
2026-03-10T07:21:50.295 DEBUG:teuthology.orchestra.run.vm05:> '
2026-03-10T07:21:51.671 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:51 vm09 bash[21099]: cluster 2026-03-10T07:21:50.276438+0000 mgr.vm05.wnsmpp (mgr.14195) 244 : cluster [DBG] pgmap v144: 97 pgs: 97 active+clean; 454 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 938 B/s wr, 1 op/s
2026-03-10T07:21:51.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:51 vm05 bash[17520]: cluster 2026-03-10T07:21:50.276438+0000 mgr.vm05.wnsmpp (mgr.14195) 244 : cluster [DBG] pgmap v144: 97 pgs: 97 active+clean; 454 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 938 B/s wr, 1 op/s
2026-03-10T07:21:53.671 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:53 vm09 bash[21099]: cluster 2026-03-10T07:21:52.276851+0000 mgr.vm05.wnsmpp (mgr.14195) 245 : cluster [DBG] pgmap v145: 97 pgs: 97 active+clean; 454 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s
2026-03-10T07:21:53.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:53 vm05 bash[17520]: cluster 2026-03-10T07:21:52.276851+0000 mgr.vm05.wnsmpp (mgr.14195) 245 : cluster [DBG] pgmap v145: 97 pgs: 97 active+clean; 454 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s
2026-03-10T07:21:54.995 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:21:55.103 INFO:teuthology.orchestra.run.vm05.stdout:Check with each haproxy down in turn...
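The quoting noise in the echoed command above (`'"'"'`) is just teuthology wrapping the suite's script in `bash -c '...'`, with each embedded single quote escaped. Unwrapped, the failover check is the loop below; this is an equivalent readable sketch of what ran, not the literal text the job executed:

    echo "Check with each haproxy down in turn..."
    for haproxy in $(ceph orch ps | grep ^haproxy.nfs.foo. | awk '{print $1}'); do
        ceph orch daemon stop "$haproxy"
        # Wait until the orchestrator reports this backend as stopped.
        while ! ceph orch ps | grep "$haproxy" | grep -q stopped; do sleep 1; done
        # The NFS mount on the virtual IP must stay usable while this haproxy is down:
        cat /mnt/foo/testfile
        echo "$haproxy" > /mnt/foo/testfile
        sync
        ceph orch daemon start "$haproxy"
        while ! ceph orch ps | grep "$haproxy" | grep -q running; do sleep 1; done
    done

The point of the test: with two haproxy daemons behind keepalived's virtual IP (12.12.1.105), stopping either one in turn should leave the mounted export readable and writable, which the `cat`/`echo`/`sync` round-trip verifies on every iteration.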
2026-03-10T07:21:55.456 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled to stop haproxy.nfs.foo.vm05.yhprte on host 'vm05'
2026-03-10T07:21:55.671 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:55 vm09 bash[21099]: cluster 2026-03-10T07:21:54.277343+0000 mgr.vm05.wnsmpp (mgr.14195) 246 : cluster [DBG] pgmap v146: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:21:55.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:55 vm05 bash[17520]: cluster 2026-03-10T07:21:54.277343+0000 mgr.vm05.wnsmpp (mgr.14195) 246 : cluster [DBG] pgmap v146: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:21:56.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:56 vm05 bash[17520]: audit 2026-03-10T07:21:55.263788+0000 mgr.vm05.wnsmpp (mgr.14195) 247 : audit [DBG] from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:56.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:56 vm05 bash[17520]: audit 2026-03-10T07:21:55.442557+0000 mgr.vm05.wnsmpp (mgr.14195) 248 : audit [DBG] from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "haproxy.nfs.foo.vm05.yhprte", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:56.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:56 vm05 bash[17520]: cephadm 2026-03-10T07:21:55.443005+0000 mgr.vm05.wnsmpp (mgr.14195) 249 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.vm05.yhprte
2026-03-10T07:21:56.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:56 vm05 bash[17520]: audit 2026-03-10T07:21:55.448705+0000 mon.vm05 (mon.0) 862 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:56.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:56 vm05 bash[17520]: audit 2026-03-10T07:21:55.455353+0000 mon.vm05 (mon.0) 863 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:56.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:56 vm05 bash[17520]: audit 2026-03-10T07:21:55.456496+0000 mon.vm05 (mon.0) 864 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:56.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:56 vm09 bash[21099]: audit 2026-03-10T07:21:55.263788+0000 mgr.vm05.wnsmpp (mgr.14195) 247 : audit [DBG] from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:56.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:56 vm09 bash[21099]: audit 2026-03-10T07:21:55.442557+0000 mgr.vm05.wnsmpp (mgr.14195) 248 : audit [DBG] from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "haproxy.nfs.foo.vm05.yhprte", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:56.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:56 vm09 bash[21099]: cephadm 2026-03-10T07:21:55.443005+0000 mgr.vm05.wnsmpp (mgr.14195) 249 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.vm05.yhprte
2026-03-10T07:21:56.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:56 vm09 bash[21099]: audit 2026-03-10T07:21:55.448705+0000 mon.vm05 (mon.0) 862 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:56.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:56 vm09 bash[21099]: audit 2026-03-10T07:21:55.455353+0000 mon.vm05 (mon.0) 863 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:21:56.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:56 vm09 bash[21099]: audit 2026-03-10T07:21:55.456496+0000 mon.vm05 (mon.0) 864 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:21:57.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:57 vm05 bash[17520]: audit 2026-03-10T07:21:55.644738+0000 mgr.vm05.wnsmpp (mgr.14195) 250 : audit [DBG] from='client.14608 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:57.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:57 vm05 bash[17520]: cluster 2026-03-10T07:21:56.277769+0000 mgr.vm05.wnsmpp (mgr.14195) 251 : cluster [DBG] pgmap v147: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:21:57.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:57 vm09 bash[21099]: audit 2026-03-10T07:21:55.644738+0000 mgr.vm05.wnsmpp (mgr.14195) 250 : audit [DBG] from='client.14608 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:57.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:57 vm09 bash[21099]: cluster 2026-03-10T07:21:56.277769+0000 mgr.vm05.wnsmpp (mgr.14195) 251 : cluster [DBG] pgmap v147: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:21:58.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:58 vm09 bash[21099]: audit 2026-03-10T07:21:56.833813+0000 mgr.vm05.wnsmpp (mgr.14195) 252 : audit [DBG] from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:58.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:58 vm09 bash[21099]: audit 2026-03-10T07:21:57.676566+0000 mon.vm05 (mon.0) 865 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:21:58.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:58 vm09 bash[21099]: audit 2026-03-10T07:21:58.009218+0000 mgr.vm05.wnsmpp (mgr.14195) 253 : audit [DBG] from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:58.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:58 vm09 bash[21099]: cluster 2026-03-10T07:21:58.278155+0000 mgr.vm05.wnsmpp (mgr.14195) 254 : cluster [DBG] pgmap v148: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:21:58.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:58 vm05 bash[17520]: audit 2026-03-10T07:21:56.833813+0000 mgr.vm05.wnsmpp (mgr.14195) 252 : audit [DBG] from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:58.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:58 vm05 bash[17520]: audit 2026-03-10T07:21:57.676566+0000 mon.vm05 (mon.0) 865 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:21:58.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:58 vm05 bash[17520]: audit 2026-03-10T07:21:58.009218+0000 mgr.vm05.wnsmpp (mgr.14195) 253 : audit [DBG] from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:58.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:58 vm05 bash[17520]: cluster 2026-03-10T07:21:58.278155+0000 mgr.vm05.wnsmpp (mgr.14195) 254 : cluster [DBG] pgmap v148: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T07:21:59.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:21:59 vm09 bash[21099]: audit 2026-03-10T07:21:59.195827+0000 mgr.vm05.wnsmpp (mgr.14195) 255 : audit [DBG] from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:21:59.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:21:59 vm05 bash[17520]: audit 2026-03-10T07:21:59.195827+0000 mgr.vm05.wnsmpp (mgr.14195) 255 : audit [DBG] from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:00.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:00 vm09 bash[21099]: cluster 2026-03-10T07:22:00.278652+0000 mgr.vm05.wnsmpp (mgr.14195) 256 : cluster [DBG] pgmap v149: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 1.4 KiB/s wr, 2 op/s
2026-03-10T07:22:00.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:00 vm09 bash[21099]: audit 2026-03-10T07:22:00.383488+0000 mgr.vm05.wnsmpp (mgr.14195) 257 : audit [DBG] from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:00.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10
2026-03-10T07:22:02.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:00.747616+0000 mon.vm05 (mon.0) 866 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:00.756687+0000 mon.vm05 (mon.0) 867 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:01.129086+0000 mon.vm05 (mon.0) 868 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:01.135160+0000 mon.vm05 (mon.0) 869 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:01.137494+0000 mon.vm05 (mon.0) 870 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:01.138054+0000 mon.vm05 (mon.0) 871 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:01.144885+0000 mon.vm05 (mon.0) 872 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:01.146938+0000 mon.vm05 (mon.0) 873 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:02.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:01 vm09 bash[21099]: audit 2026-03-10T07:22:01.156916+0000 mon.vm05 (mon.0) 874 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:00.747616+0000 mon.vm05 (mon.0) 866 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:00.756687+0000 mon.vm05 (mon.0) 867 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:01.129086+0000 mon.vm05 (mon.0) 868 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:01.135160+0000 mon.vm05 (mon.0) 869 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:01.137494+0000 mon.vm05 (mon.0) 870 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:01.138054+0000 mon.vm05 (mon.0) 871 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:01.144885+0000 mon.vm05 (mon.0) 872 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:01.146938+0000 mon.vm05 (mon.0) 873 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:02.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:01 vm05 bash[17520]: audit 2026-03-10T07:22:01.156916+0000 mon.vm05 (mon.0) 874 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:03.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:02 vm09 bash[21099]: audit 2026-03-10T07:22:01.567661+0000 mgr.vm05.wnsmpp (mgr.14195) 258 : audit [DBG] from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:03.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:02 vm09 bash[21099]: cluster 2026-03-10T07:22:02.279063+0000 mgr.vm05.wnsmpp (mgr.14195) 259 : cluster [DBG] pgmap v150: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 597 B/s wr, 0 op/s
2026-03-10T07:22:03.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:02 vm05 bash[17520]: audit 2026-03-10T07:22:01.567661+0000 mgr.vm05.wnsmpp (mgr.14195) 258 : audit [DBG] from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:03.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:02 vm05 bash[17520]: cluster 2026-03-10T07:22:02.279063+0000 mgr.vm05.wnsmpp (mgr.14195) 259 : cluster [DBG] pgmap v150: 97 pgs: 97 active+clean; 457 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 597 B/s wr, 0 op/s
2026-03-10T07:22:04.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:03 vm09 bash[21099]: audit 2026-03-10T07:22:02.746270+0000 mgr.vm05.wnsmpp (mgr.14195) 260 : audit [DBG] from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:04.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:03 vm05 bash[17520]: audit 2026-03-10T07:22:02.746270+0000 mgr.vm05.wnsmpp (mgr.14195) 260 : audit [DBG] from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:05.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:04 vm09 bash[21099]: audit 2026-03-10T07:22:03.934300+0000 mgr.vm05.wnsmpp (mgr.14195) 261 : audit [DBG] from='client.14636 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:05.171 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:04 vm09 bash[21099]: cluster 2026-03-10T07:22:04.279520+0000 mgr.vm05.wnsmpp (mgr.14195) 262 : cluster [DBG] pgmap v151: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 682 B/s wr, 0 op/s
2026-03-10T07:22:05.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:04 vm05 bash[17520]: audit 2026-03-10T07:22:03.934300+0000 mgr.vm05.wnsmpp (mgr.14195) 261 : audit [DBG] from='client.14636 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:05.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:04 vm05 bash[17520]: cluster 2026-03-10T07:22:04.279520+0000 mgr.vm05.wnsmpp (mgr.14195) 262 : cluster [DBG] pgmap v151: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 682 B/s wr, 0 op/s
2026-03-10T07:22:06.065 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:05 vm05 bash[17520]: audit 2026-03-10T07:22:05.093139+0000 mgr.vm05.wnsmpp (mgr.14195) 263 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:06.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:05 vm09 bash[21099]: audit 2026-03-10T07:22:05.093139+0000 mgr.vm05.wnsmpp (mgr.14195) 263 : audit [DBG] from='client.14640 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:07.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:07 vm09 bash[21099]: audit 2026-03-10T07:22:06.149437+0000 mon.vm05 (mon.0) 875 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:07.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:07 vm09 bash[21099]: audit 2026-03-10T07:22:06.155420+0000 mon.vm05 (mon.0) 876 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:07.421 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:07 vm09 bash[21099]: audit 2026-03-10T07:22:06.190053+0000 mon.vm05 (mon.0) 877 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:07.421 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:07 vm09 bash[21099]: cluster 2026-03-10T07:22:06.279909+0000 mgr.vm05.wnsmpp (mgr.14195) 264 : cluster [DBG] pgmap v152: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:07.421 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:07 vm09 bash[21099]: audit 2026-03-10T07:22:06.283259+0000 mgr.vm05.wnsmpp (mgr.14195) 265 : audit [DBG] from='client.14644 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:07.432 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:07 vm05 bash[17520]: audit 2026-03-10T07:22:06.149437+0000 mon.vm05 (mon.0) 875 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:07.432 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:07 vm05 bash[17520]: audit 2026-03-10T07:22:06.155420+0000 mon.vm05 (mon.0) 876 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:07.432 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:07 vm05 bash[17520]: audit 2026-03-10T07:22:06.190053+0000 mon.vm05 (mon.0) 877 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:07.432 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:07 vm05 bash[17520]: cluster 2026-03-10T07:22:06.279909+0000 mgr.vm05.wnsmpp (mgr.14195) 264 : cluster [DBG] pgmap v152: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:07.432 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:07 vm05 bash[17520]: audit 2026-03-10T07:22:06.283259+0000 mgr.vm05.wnsmpp (mgr.14195) 265 : audit [DBG] from='client.14644 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:09.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:09 vm09 bash[21099]: audit 2026-03-10T07:22:07.473846+0000 mgr.vm05.wnsmpp (mgr.14195) 266 : audit [DBG] from='client.14648 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:09.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:09 vm09 bash[21099]: cluster 2026-03-10T07:22:08.280375+0000 mgr.vm05.wnsmpp (mgr.14195) 267 : cluster [DBG] pgmap v153: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:09.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:09 vm05 bash[17520]: audit 2026-03-10T07:22:07.473846+0000 mgr.vm05.wnsmpp (mgr.14195) 266 : audit [DBG] from='client.14648 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:09.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:09 vm05 bash[17520]: cluster 2026-03-10T07:22:08.280375+0000 mgr.vm05.wnsmpp (mgr.14195) 267 : cluster [DBG] pgmap v153: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:10.624 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:10 vm09 bash[21099]: audit 2026-03-10T07:22:08.650126+0000 mgr.vm05.wnsmpp (mgr.14195) 268 : audit [DBG] from='client.14652 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:10.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:10 vm05 bash[17520]: audit 2026-03-10T07:22:08.650126+0000 mgr.vm05.wnsmpp (mgr.14195) 268 : audit [DBG] from='client.14652 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:11.013 INFO:teuthology.orchestra.run.vm05.stdout:haproxy.nfs.foo.vm05.yhprte vm05 *:2049,9002 stopped 0s ago 72s - -
2026-03-10T07:22:11.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:11 vm09 bash[21099]: audit 2026-03-10T07:22:09.819646+0000 mgr.vm05.wnsmpp (mgr.14195) 269 : audit [DBG] from='client.14656 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:11.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:11 vm09 bash[21099]: cluster 2026-03-10T07:22:10.280891+0000 mgr.vm05.wnsmpp (mgr.14195) 270 : cluster [DBG] pgmap v154: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:11.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:11 vm09 bash[21099]: audit 2026-03-10T07:22:10.800705+0000 mon.vm05 (mon.0) 878 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:11.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:11 vm09 bash[21099]: audit 2026-03-10T07:22:10.806946+0000 mon.vm05 (mon.0) 879 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:11.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:11 vm05 bash[17520]: audit 2026-03-10T07:22:09.819646+0000 mgr.vm05.wnsmpp (mgr.14195) 269 : audit [DBG] from='client.14656 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:11.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:11 vm05 bash[17520]: cluster 2026-03-10T07:22:10.280891+0000 mgr.vm05.wnsmpp (mgr.14195) 270 : cluster [DBG] pgmap v154: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:11.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:11 vm05 bash[17520]: audit 2026-03-10T07:22:10.800705+0000 mon.vm05 (mon.0) 878 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:11.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:11 vm05 bash[17520]: audit 2026-03-10T07:22:10.806946+0000 mon.vm05 (mon.0) 879 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:10.997361+0000 mgr.vm05.wnsmpp (mgr.14195) 271 : audit [DBG] from='client.24437 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:11.486995+0000 mon.vm05 (mon.0) 880 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:11.492729+0000 mon.vm05 (mon.0) 881 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:11.493728+0000 mon.vm05 (mon.0) 882 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:11.494222+0000 mon.vm05 (mon.0) 883 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:11.498353+0000 mon.vm05 (mon.0) 884 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:11.500036+0000 mon.vm05 (mon.0) 885 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:12.625 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:12 vm09 bash[21099]: audit 2026-03-10T07:22:11.506643+0000 mon.vm05 (mon.0) 886 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:10.997361+0000 mgr.vm05.wnsmpp (mgr.14195) 271 : audit [DBG] from='client.24437 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:11.486995+0000 mon.vm05 (mon.0) 880 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:11.492729+0000 mon.vm05 (mon.0) 881 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:11.493728+0000 mon.vm05 (mon.0) 882 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:11.494222+0000 mon.vm05 (mon.0) 883 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:11.498353+0000 mon.vm05 (mon.0) 884 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:11.500036+0000 mon.vm05 (mon.0) 885 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:12 vm05 bash[17520]: audit 2026-03-10T07:22:11.506643+0000 mon.vm05 (mon.0) 886 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:13.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:13 vm09 bash[21099]: cluster 2026-03-10T07:22:12.281303+0000 mgr.vm05.wnsmpp (mgr.14195) 272 : cluster [DBG] pgmap v155: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:13.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:13 vm09 bash[21099]: audit 2026-03-10T07:22:12.676738+0000 mon.vm05 (mon.0) 887 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:22:13.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:13 vm05 bash[17520]: cluster 2026-03-10T07:22:12.281303+0000 mgr.vm05.wnsmpp (mgr.14195) 272 : cluster [DBG] pgmap v155: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:13.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:13 vm05 bash[17520]: audit 2026-03-10T07:22:12.676738+0000 mon.vm05 (mon.0) 887 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:22:14.028 INFO:teuthology.orchestra.run.vm05.stdout:test
2026-03-10T07:22:14.220 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled to start haproxy.nfs.foo.vm05.yhprte on host 'vm05'
2026-03-10T07:22:15.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:15 vm09 bash[21099]: audit 2026-03-10T07:22:14.205500+0000 mgr.vm05.wnsmpp (mgr.14195) 273 : audit [DBG] from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "start", "name": "haproxy.nfs.foo.vm05.yhprte", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:15.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:15 vm09 bash[21099]: cephadm 2026-03-10T07:22:14.205919+0000 mgr.vm05.wnsmpp (mgr.14195) 274 : cephadm [INF] Schedule start daemon haproxy.nfs.foo.vm05.yhprte
2026-03-10T07:22:15.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:15 vm09 bash[21099]: audit 2026-03-10T07:22:14.212494+0000 mon.vm05 (mon.0) 888 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:15.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:15 vm09 bash[21099]: audit 2026-03-10T07:22:14.219739+0000 mon.vm05 (mon.0) 889 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:15.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:15 vm09 bash[21099]: audit 2026-03-10T07:22:14.221824+0000 mon.vm05 (mon.0) 890 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:15.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:15 vm09 bash[21099]: cluster 2026-03-10T07:22:14.281722+0000 mgr.vm05.wnsmpp (mgr.14195) 275 : cluster [DBG] pgmap v156: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:22:15.421 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:15 vm09 bash[21099]: audit 2026-03-10T07:22:14.415850+0000 mgr.vm05.wnsmpp (mgr.14195) 276 : audit [DBG] from='client.14668 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:15.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:15 vm05 bash[17520]: audit 2026-03-10T07:22:14.205500+0000 mgr.vm05.wnsmpp (mgr.14195) 273 : audit [DBG] from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "start", "name": "haproxy.nfs.foo.vm05.yhprte", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:15.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:15 vm05 bash[17520]: cephadm 2026-03-10T07:22:14.205919+0000 mgr.vm05.wnsmpp (mgr.14195) 274 : cephadm [INF] Schedule start daemon haproxy.nfs.foo.vm05.yhprte
2026-03-10T07:22:15.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:15 vm05 bash[17520]: audit 2026-03-10T07:22:14.212494+0000 mon.vm05 (mon.0) 888 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:15.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:15 vm05 bash[17520]: audit 2026-03-10T07:22:14.219739+0000 mon.vm05 (mon.0) 889 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:15.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:15 vm05 bash[17520]: audit 2026-03-10T07:22:14.221824+0000 mon.vm05 (mon.0) 890 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:15.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:15 vm05 bash[17520]: cluster 2026-03-10T07:22:14.281722+0000 mgr.vm05.wnsmpp (mgr.14195) 275 : cluster [DBG] pgmap v156: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:22:15.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:15 vm05 bash[17520]: audit 2026-03-10T07:22:14.415850+0000 mgr.vm05.wnsmpp (mgr.14195) 276 : audit [DBG] from='client.14668 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:17.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:16 vm09 bash[21099]: audit 2026-03-10T07:22:15.596122+0000 mgr.vm05.wnsmpp (mgr.14195) 277 : audit [DBG] from='client.14672 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:17.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:16 vm09 bash[21099]: cluster 2026-03-10T07:22:16.282118+0000 mgr.vm05.wnsmpp (mgr.14195) 278 : cluster [DBG] pgmap v157: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
2026-03-10T07:22:17.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:16 vm05 bash[17520]: audit 2026-03-10T07:22:15.596122+0000 mgr.vm05.wnsmpp (mgr.14195) 277 : audit [DBG] from='client.14672 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:17.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:16 vm05 bash[17520]: cluster 2026-03-10T07:22:16.282118+0000 mgr.vm05.wnsmpp (mgr.14195) 278 : cluster [DBG] pgmap v157: 97 pgs: 97 active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
active+clean; 459 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s 2026-03-10T07:22:18.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:17 vm05 bash[17520]: audit 2026-03-10T07:22:16.765835+0000 mgr.vm05.wnsmpp (mgr.14195) 279 : audit [DBG] from='client.14676 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:18.211 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:17 vm05 bash[17520]: audit 2026-03-10T07:22:16.765835+0000 mgr.vm05.wnsmpp (mgr.14195) 279 : audit [DBG] from='client.14676 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:18.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:17 vm09 bash[21099]: audit 2026-03-10T07:22:16.765835+0000 mgr.vm05.wnsmpp (mgr.14195) 279 : audit [DBG] from='client.14676 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:18.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:17 vm09 bash[21099]: audit 2026-03-10T07:22:16.765835+0000 mgr.vm05.wnsmpp (mgr.14195) 279 : audit [DBG] from='client.14676 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:19.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:18 vm09 bash[21099]: audit 2026-03-10T07:22:17.930429+0000 mgr.vm05.wnsmpp (mgr.14195) 280 : audit [DBG] from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:19.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:18 vm09 bash[21099]: audit 2026-03-10T07:22:17.930429+0000 mgr.vm05.wnsmpp (mgr.14195) 280 : audit [DBG] from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:19.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:18 vm09 bash[21099]: cluster 2026-03-10T07:22:18.282533+0000 mgr.vm05.wnsmpp (mgr.14195) 281 : cluster [DBG] pgmap v158: 97 pgs: 97 active+clean; 459 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s 2026-03-10T07:22:19.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:18 vm09 bash[21099]: cluster 2026-03-10T07:22:18.282533+0000 mgr.vm05.wnsmpp (mgr.14195) 281 : cluster [DBG] pgmap v158: 97 pgs: 97 active+clean; 459 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s 2026-03-10T07:22:19.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:18 vm05 bash[17520]: audit 2026-03-10T07:22:17.930429+0000 mgr.vm05.wnsmpp (mgr.14195) 280 : audit [DBG] from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:19.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:18 vm05 bash[17520]: audit 2026-03-10T07:22:17.930429+0000 mgr.vm05.wnsmpp (mgr.14195) 280 : audit [DBG] from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:19.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:18 vm05 bash[17520]: cluster 2026-03-10T07:22:18.282533+0000 mgr.vm05.wnsmpp (mgr.14195) 281 : cluster [DBG] pgmap v158: 97 pgs: 97 active+clean; 459 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s 2026-03-10T07:22:19.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:18 vm05 bash[17520]: cluster 2026-03-10T07:22:18.282533+0000 mgr.vm05.wnsmpp (mgr.14195) 281 : cluster [DBG] 
pgmap v158: 97 pgs: 97 active+clean; 459 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.127955+0000 mgr.vm05.wnsmpp (mgr.14195) 282 : audit [DBG] from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.127955+0000 mgr.vm05.wnsmpp (mgr.14195) 282 : audit [DBG] from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.555377+0000 mon.vm05 (mon.0) 891 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.555377+0000 mon.vm05 (mon.0) 891 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.563608+0000 mon.vm05 (mon.0) 892 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.563608+0000 mon.vm05 (mon.0) 892 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.865332+0000 mon.vm05 (mon.0) 893 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.865332+0000 mon.vm05 (mon.0) 893 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.872897+0000 mon.vm05 (mon.0) 894 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.872897+0000 mon.vm05 (mon.0) 894 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.874106+0000 mon.vm05 (mon.0) 895 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.874106+0000 mon.vm05 (mon.0) 895 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.874739+0000 mon.vm05 (mon.0) 896 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.874739+0000 mon.vm05 (mon.0) 896 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.880240+0000 mon.vm05 (mon.0) 897 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.880240+0000 mon.vm05 (mon.0) 897 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.882752+0000 mon.vm05 (mon.0) 898 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.882752+0000 mon.vm05 (mon.0) 898 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.890705+0000 mon.vm05 (mon.0) 899 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.420 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:19 vm09 bash[21099]: audit 2026-03-10T07:22:19.890705+0000 mon.vm05 (mon.0) 899 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.127955+0000 mgr.vm05.wnsmpp (mgr.14195) 282 : audit [DBG] from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.127955+0000 mgr.vm05.wnsmpp (mgr.14195) 282 : audit [DBG] from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.555377+0000 mon.vm05 (mon.0) 891 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.555377+0000 mon.vm05 (mon.0) 891 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.563608+0000 mon.vm05 (mon.0) 892 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.563608+0000 mon.vm05 (mon.0) 892 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.865332+0000 mon.vm05 (mon.0) 893 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.865332+0000 mon.vm05 (mon.0) 893 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.872897+0000 mon.vm05 (mon.0) 894 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.872897+0000 mon.vm05 (mon.0) 894 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.874106+0000 mon.vm05 (mon.0) 895 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.874106+0000 mon.vm05 (mon.0) 895 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.874739+0000 mon.vm05 (mon.0) 896 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.874739+0000 mon.vm05 (mon.0) 896 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.880240+0000 mon.vm05 (mon.0) 897 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.880240+0000 mon.vm05 (mon.0) 897 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.882752+0000 mon.vm05 (mon.0) 898 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.882752+0000 mon.vm05 (mon.0) 898 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.890705+0000 mon.vm05 (mon.0) 899 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 
2026-03-10T07:22:20.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:19 vm05 bash[17520]: audit 2026-03-10T07:22:19.890705+0000 mon.vm05 (mon.0) 899 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:21.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:20 vm09 bash[21099]: cluster 2026-03-10T07:22:20.282964+0000 mgr.vm05.wnsmpp (mgr.14195) 283 : cluster [DBG] pgmap v159: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:22:21.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:20 vm09 bash[21099]: cluster 2026-03-10T07:22:20.282964+0000 mgr.vm05.wnsmpp (mgr.14195) 283 : cluster [DBG] pgmap v159: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:22:21.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:20 vm09 bash[21099]: audit 2026-03-10T07:22:20.317868+0000 mgr.vm05.wnsmpp (mgr.14195) 284 : audit [DBG] from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:21.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:20 vm09 bash[21099]: audit 2026-03-10T07:22:20.317868+0000 mgr.vm05.wnsmpp (mgr.14195) 284 : audit [DBG] from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:21.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:20 vm05 bash[17520]: cluster 2026-03-10T07:22:20.282964+0000 mgr.vm05.wnsmpp (mgr.14195) 283 : cluster [DBG] pgmap v159: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:22:21.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:20 vm05 bash[17520]: cluster 2026-03-10T07:22:20.282964+0000 mgr.vm05.wnsmpp (mgr.14195) 283 : cluster [DBG] pgmap v159: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:22:21.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:20 vm05 bash[17520]: audit 2026-03-10T07:22:20.317868+0000 mgr.vm05.wnsmpp (mgr.14195) 284 : audit [DBG] from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:21.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:20 vm05 bash[17520]: audit 2026-03-10T07:22:20.317868+0000 mgr.vm05.wnsmpp (mgr.14195) 284 : audit [DBG] from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:23.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:23 vm09 bash[21099]: audit 2026-03-10T07:22:21.480228+0000 mgr.vm05.wnsmpp (mgr.14195) 285 : audit [DBG] from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:23.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:23 vm09 bash[21099]: audit 2026-03-10T07:22:21.480228+0000 mgr.vm05.wnsmpp (mgr.14195) 285 : audit [DBG] from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:23.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:23 vm09 bash[21099]: cluster 2026-03-10T07:22:22.283344+0000 mgr.vm05.wnsmpp (mgr.14195) 286 : cluster [DBG] pgmap v160: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 
2026-03-10T07:22:23.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:23 vm05 bash[17520]: audit 2026-03-10T07:22:21.480228+0000 mgr.vm05.wnsmpp (mgr.14195) 285 : audit [DBG] from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:23.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:23 vm05 bash[17520]: cluster 2026-03-10T07:22:22.283344+0000 mgr.vm05.wnsmpp (mgr.14195) 286 : cluster [DBG] pgmap v160: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s
2026-03-10T07:22:24.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:24 vm09 bash[21099]: audit 2026-03-10T07:22:22.659266+0000 mgr.vm05.wnsmpp (mgr.14195) 287 : audit [DBG] from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:24.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:24 vm05 bash[17520]: audit 2026-03-10T07:22:22.659266+0000 mgr.vm05.wnsmpp (mgr.14195) 287 : audit [DBG] from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:25.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:25 vm09 bash[21099]: audit 2026-03-10T07:22:23.842402+0000 mgr.vm05.wnsmpp (mgr.14195) 288 : audit [DBG] from='client.14700 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:25.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:25 vm09 bash[21099]: cluster 2026-03-10T07:22:24.283783+0000 mgr.vm05.wnsmpp (mgr.14195) 289 : cluster [DBG] pgmap v161: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 1 op/s
2026-03-10T07:22:25.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:25 vm05 bash[17520]: audit 2026-03-10T07:22:23.842402+0000 mgr.vm05.wnsmpp (mgr.14195) 288 : audit [DBG] from='client.14700 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:25.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:25 vm05 bash[17520]: cluster 2026-03-10T07:22:24.283783+0000 mgr.vm05.wnsmpp (mgr.14195) 289 : cluster [DBG] pgmap v161: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 1 op/s
2026-03-10T07:22:26.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:26 vm09 bash[21099]: audit 2026-03-10T07:22:25.023095+0000 mgr.vm05.wnsmpp (mgr.14195) 290 : audit [DBG] from='client.14704 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:26.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:26 vm05 bash[17520]: audit 2026-03-10T07:22:25.023095+0000 mgr.vm05.wnsmpp (mgr.14195) 290 : audit [DBG] from='client.14704 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:27.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:27 vm09 bash[21099]: audit 2026-03-10T07:22:26.214419+0000 mgr.vm05.wnsmpp (mgr.14195) 291 : audit [DBG] from='client.14708 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:27.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:27 vm09 bash[21099]: cluster 2026-03-10T07:22:26.284156+0000 mgr.vm05.wnsmpp (mgr.14195) 292 : cluster [DBG] pgmap v162: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.1 KiB/s wr, 0 op/s
2026-03-10T07:22:27.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:27 vm05 bash[17520]: audit 2026-03-10T07:22:26.214419+0000 mgr.vm05.wnsmpp (mgr.14195) 291 : audit [DBG] from='client.14708 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:27.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:27 vm05 bash[17520]: cluster 2026-03-10T07:22:26.284156+0000 mgr.vm05.wnsmpp (mgr.14195) 292 : cluster [DBG] pgmap v162: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.1 KiB/s wr, 0 op/s
2026-03-10T07:22:28.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:28 vm09 bash[21099]: audit 2026-03-10T07:22:27.391818+0000 mgr.vm05.wnsmpp (mgr.14195) 293 : audit [DBG] from='client.14712 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:28.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:28 vm09 bash[21099]: audit 2026-03-10T07:22:27.677035+0000 mon.vm05 (mon.0) 900 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:22:28.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:28 vm05 bash[17520]: audit 2026-03-10T07:22:27.391818+0000 mgr.vm05.wnsmpp (mgr.14195) 293 : audit [DBG] from='client.14712 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:28.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:28 vm05 bash[17520]: audit 2026-03-10T07:22:27.677035+0000 mon.vm05 (mon.0) 900 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T07:22:29.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:29 vm09 bash[21099]: cluster 2026-03-10T07:22:28.284511+0000 mgr.vm05.wnsmpp (mgr.14195) 294 : cluster [DBG] pgmap v163: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.1 KiB/s wr, 0 op/s
2026-03-10T07:22:29.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:29 vm09 bash[21099]: audit 2026-03-10T07:22:28.820198+0000 mon.vm05 (mon.0) 901 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:29.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:29 vm09 bash[21099]: audit 2026-03-10T07:22:28.825994+0000 mon.vm05 (mon.0) 902 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:29.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:29 vm09 bash[21099]: audit 2026-03-10T07:22:28.859806+0000 mon.vm05 (mon.0) 903 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:29.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:29 vm05 bash[17520]: cluster 2026-03-10T07:22:28.284511+0000 mgr.vm05.wnsmpp (mgr.14195) 294 : cluster [DBG] pgmap v163: 97 pgs: 97 active+clean; 468 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.1 KiB/s wr, 0 op/s
2026-03-10T07:22:29.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:29 vm05 bash[17520]: audit 2026-03-10T07:22:28.820198+0000 mon.vm05 (mon.0) 901 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:29.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:29 vm05 bash[17520]: audit 2026-03-10T07:22:28.825994+0000 mon.vm05 (mon.0) 902 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:29.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:29 vm05 bash[17520]: audit 2026-03-10T07:22:28.859806+0000 mon.vm05 (mon.0) 903 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:30.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:30 vm09 bash[21099]: audit 2026-03-10T07:22:28.559636+0000 mgr.vm05.wnsmpp (mgr.14195) 295 : audit [DBG] from='client.14716 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:30.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:30 vm05 bash[17520]: audit 2026-03-10T07:22:28.559636+0000 mgr.vm05.wnsmpp (mgr.14195) 295 : audit [DBG] from='client.14716 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:31.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:31 vm09 bash[21099]: audit 2026-03-10T07:22:29.738289+0000 mgr.vm05.wnsmpp (mgr.14195) 296 : audit [DBG] from='client.14720 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:31.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:31 vm09 bash[21099]: cluster 2026-03-10T07:22:30.284893+0000 mgr.vm05.wnsmpp (mgr.14195) 297 : cluster [DBG] pgmap v164: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s
2026-03-10T07:22:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:31 vm05 bash[17520]: audit 2026-03-10T07:22:29.738289+0000 mgr.vm05.wnsmpp (mgr.14195) 296 : audit [DBG] from='client.14720 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:31 vm05 bash[17520]: cluster 2026-03-10T07:22:30.284893+0000 mgr.vm05.wnsmpp (mgr.14195) 297 : cluster [DBG] pgmap v164: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s
2026-03-10T07:22:32.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:32 vm09 bash[21099]: audit 2026-03-10T07:22:30.921814+0000 mgr.vm05.wnsmpp (mgr.14195) 298 : audit [DBG] from='client.14724 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:32.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:32 vm05 bash[17520]: audit 2026-03-10T07:22:30.921814+0000 mgr.vm05.wnsmpp (mgr.14195) 298 : audit [DBG] from='client.14724 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:33.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:33 vm09 bash[21099]: audit 2026-03-10T07:22:32.094975+0000 mgr.vm05.wnsmpp (mgr.14195) 299 : audit [DBG] from='client.14728 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:33.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:33 vm09 bash[21099]: cluster 2026-03-10T07:22:32.285255+0000 mgr.vm05.wnsmpp (mgr.14195) 300 : cluster [DBG] pgmap v165: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:33.718 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:33 vm05 bash[17520]: audit 2026-03-10T07:22:32.094975+0000 mgr.vm05.wnsmpp (mgr.14195) 299 : audit [DBG] from='client.14728 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:33.718 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:33 vm05 bash[17520]: cluster 2026-03-10T07:22:32.285255+0000 mgr.vm05.wnsmpp (mgr.14195) 300 : cluster [DBG] pgmap v165: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:34.587 INFO:teuthology.orchestra.run.vm05.stdout:haproxy.nfs.foo.vm05.yhprte vm05 *:2049,9002 running (5s) 0s ago 95s 3476k - 2.3.17-d1c9119 e85424b0d443 7adcd637ba7d
2026-03-10T07:22:34.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:34 vm09 bash[21099]: audit 2026-03-10T07:22:33.359348+0000 mgr.vm05.wnsmpp (mgr.14195) 301 : audit [DBG] from='client.14732 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:34.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:34 vm09 bash[21099]: audit 2026-03-10T07:22:33.740665+0000 mon.vm05 (mon.0) 904 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:34 vm09 bash[21099]: audit 2026-03-10T07:22:33.747220+0000 mon.vm05 (mon.0) 905 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:34 vm09 bash[21099]: audit 2026-03-10T07:22:34.179186+0000 mon.vm05 (mon.0) 906 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:34 vm09 bash[21099]: audit 2026-03-10T07:22:34.227996+0000 mon.vm05 (mon.0) 907 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:34 vm05 bash[17520]: audit 2026-03-10T07:22:33.359348+0000 mgr.vm05.wnsmpp (mgr.14195) 301 : audit [DBG] from='client.14732 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:34.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:34 vm05 bash[17520]: audit 2026-03-10T07:22:33.740665+0000 mon.vm05 (mon.0) 904 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:34 vm05 bash[17520]: audit 2026-03-10T07:22:33.747220+0000 mon.vm05 (mon.0) 905 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:34 vm05 bash[17520]: audit 2026-03-10T07:22:34.179186+0000 mon.vm05 (mon.0) 906 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:34 vm05 bash[17520]: audit 2026-03-10T07:22:34.227996+0000 mon.vm05 (mon.0) 907 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:34.759 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled to stop haproxy.nfs.foo.vm09.etnbzh on host 'vm09'
2026-03-10T07:22:35.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: cluster 2026-03-10T07:22:34.285778+0000 mgr.vm05.wnsmpp (mgr.14195) 302 : cluster [DBG] pgmap v166: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.545595+0000 mon.vm05 (mon.0) 908 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.546425+0000 mon.vm05 (mon.0) 909 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.552223+0000 mon.vm05 (mon.0) 910 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.553589+0000 mon.vm05 (mon.0) 911 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.562364+0000 mon.vm05 (mon.0) 912 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.752623+0000 mon.vm05 (mon.0) 913 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.758756+0000 mon.vm05 (mon.0) 914 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:35 vm09 bash[21099]: audit 2026-03-10T07:22:34.760143+0000 mon.vm05 (mon.0) 915 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: cluster 2026-03-10T07:22:34.285778+0000 mgr.vm05.wnsmpp (mgr.14195) 302 : cluster [DBG] pgmap v166: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.545595+0000 mon.vm05 (mon.0) 908 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.546425+0000 mon.vm05 (mon.0) 909 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.552223+0000 mon.vm05 (mon.0) 910 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.553589+0000 mon.vm05 (mon.0) 911 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.562364+0000 mon.vm05 (mon.0) 912 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.752623+0000 mon.vm05 (mon.0) 913 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.758756+0000 mon.vm05 (mon.0) 914 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:35 vm05 bash[17520]: audit 2026-03-10T07:22:34.760143+0000 mon.vm05 (mon.0) 915 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:22:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:36 vm05 bash[17520]: audit 2026-03-10T07:22:34.572625+0000 mgr.vm05.wnsmpp (mgr.14195) 303 : audit [DBG] from='client.14736 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:36 vm05 bash[17520]: audit 2026-03-10T07:22:34.745105+0000 mgr.vm05.wnsmpp (mgr.14195) 304 : audit [DBG] from='client.14740 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "haproxy.nfs.foo.vm09.etnbzh", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:36 vm05 bash[17520]: cephadm 2026-03-10T07:22:34.745481+0000 mgr.vm05.wnsmpp (mgr.14195) 305 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.vm09.etnbzh
2026-03-10T07:22:36.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:36 vm05 bash[17520]: audit 2026-03-10T07:22:34.966738+0000 mgr.vm05.wnsmpp (mgr.14195) 306 : audit [DBG] from='client.14744 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:36.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:36 vm09 bash[21099]: audit 2026-03-10T07:22:34.572625+0000 mgr.vm05.wnsmpp (mgr.14195) 303 : audit [DBG] from='client.14736 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:36.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:36 vm09 bash[21099]: audit 2026-03-10T07:22:34.745105+0000 mgr.vm05.wnsmpp (mgr.14195) 304 : audit [DBG] from='client.14740 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "haproxy.nfs.foo.vm09.etnbzh", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:36.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:36 vm09 bash[21099]: cephadm 2026-03-10T07:22:34.745481+0000 mgr.vm05.wnsmpp (mgr.14195) 305 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.vm09.etnbzh
2026-03-10T07:22:36.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:36 vm09 bash[21099]: audit 2026-03-10T07:22:34.966738+0000 mgr.vm05.wnsmpp (mgr.14195) 306 : audit [DBG] from='client.14744 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:37 vm05 bash[17520]: audit 2026-03-10T07:22:36.144003+0000 mgr.vm05.wnsmpp (mgr.14195) 307 : audit [DBG] from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:37 vm05 bash[17520]: cluster 2026-03-10T07:22:36.286156+0000 mgr.vm05.wnsmpp (mgr.14195) 308 : cluster [DBG] pgmap v167: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:37.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:37 vm09 bash[21099]: audit 2026-03-10T07:22:36.144003+0000 mgr.vm05.wnsmpp (mgr.14195) 307 : audit [DBG] from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:37.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:37 vm09 bash[21099]: cluster 2026-03-10T07:22:36.286156+0000 mgr.vm05.wnsmpp (mgr.14195) 308 : cluster [DBG] pgmap v167: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:38.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:38 vm05 bash[17520]: audit 2026-03-10T07:22:37.309771+0000 mgr.vm05.wnsmpp (mgr.14195) 309 : audit [DBG] from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:38.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:38 vm09 bash[21099]: audit 2026-03-10T07:22:37.309771+0000 mgr.vm05.wnsmpp (mgr.14195) 309 : audit [DBG] from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:39.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:39 vm05 bash[17520]: cluster 2026-03-10T07:22:38.286585+0000 mgr.vm05.wnsmpp (mgr.14195) 310 : cluster [DBG] pgmap v168: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:39.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:39 vm09 bash[21099]: cluster 2026-03-10T07:22:38.286585+0000 mgr.vm05.wnsmpp (mgr.14195) 310 : cluster [DBG] pgmap v168: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:38.477437+0000 mgr.vm05.wnsmpp (mgr.14195) 311 : audit [DBG] from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.333189+0000 mon.vm05 (mon.0) 916 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.338907+0000 mon.vm05 (mon.0) 917 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp'
2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.339880+0000 mon.vm05 (mon.0) 918 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.339880+0000
mon.vm05 (mon.0) 918 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.340369+0000 mon.vm05 (mon.0) 919 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.340369+0000 mon.vm05 (mon.0) 919 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.344172+0000 mon.vm05 (mon.0) 920 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.344172+0000 mon.vm05 (mon.0) 920 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.345491+0000 mon.vm05 (mon.0) 921 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.345491+0000 mon.vm05 (mon.0) 921 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.351295+0000 mon.vm05 (mon.0) 922 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:40 vm05 bash[17520]: audit 2026-03-10T07:22:40.351295+0000 mon.vm05 (mon.0) 922 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:38.477437+0000 mgr.vm05.wnsmpp (mgr.14195) 311 : audit [DBG] from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:38.477437+0000 mgr.vm05.wnsmpp (mgr.14195) 311 : audit [DBG] from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.333189+0000 mon.vm05 (mon.0) 916 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.333189+0000 mon.vm05 (mon.0) 916 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 
07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.338907+0000 mon.vm05 (mon.0) 917 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.338907+0000 mon.vm05 (mon.0) 917 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.339880+0000 mon.vm05 (mon.0) 918 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.339880+0000 mon.vm05 (mon.0) 918 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.340369+0000 mon.vm05 (mon.0) 919 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.340369+0000 mon.vm05 (mon.0) 919 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.344172+0000 mon.vm05 (mon.0) 920 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.344172+0000 mon.vm05 (mon.0) 920 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.345491+0000 mon.vm05 (mon.0) 921 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.345491+0000 mon.vm05 (mon.0) 921 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.351295+0000 mon.vm05 (mon.0) 922 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:40.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:40 vm09 bash[21099]: audit 2026-03-10T07:22:40.351295+0000 mon.vm05 (mon.0) 922 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:41.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:41 vm05 bash[17520]: audit 2026-03-10T07:22:39.655529+0000 mgr.vm05.wnsmpp (mgr.14195) 312 : audit [DBG] from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-10T07:22:41.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:41 vm05 bash[17520]: audit 2026-03-10T07:22:39.655529+0000 mgr.vm05.wnsmpp (mgr.14195) 312 : audit [DBG] from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:41.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:41 vm05 bash[17520]: cluster 2026-03-10T07:22:40.286998+0000 mgr.vm05.wnsmpp (mgr.14195) 313 : cluster [DBG] pgmap v169: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:22:41.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:41 vm05 bash[17520]: cluster 2026-03-10T07:22:40.286998+0000 mgr.vm05.wnsmpp (mgr.14195) 313 : cluster [DBG] pgmap v169: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:22:41.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:41 vm09 bash[21099]: audit 2026-03-10T07:22:39.655529+0000 mgr.vm05.wnsmpp (mgr.14195) 312 : audit [DBG] from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:41.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:41 vm09 bash[21099]: audit 2026-03-10T07:22:39.655529+0000 mgr.vm05.wnsmpp (mgr.14195) 312 : audit [DBG] from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:41.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:41 vm09 bash[21099]: cluster 2026-03-10T07:22:40.286998+0000 mgr.vm05.wnsmpp (mgr.14195) 313 : cluster [DBG] pgmap v169: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:22:41.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:41 vm09 bash[21099]: cluster 2026-03-10T07:22:40.286998+0000 mgr.vm05.wnsmpp (mgr.14195) 313 : cluster [DBG] pgmap v169: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:22:42.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:42 vm05 bash[17520]: audit 2026-03-10T07:22:40.822956+0000 mgr.vm05.wnsmpp (mgr.14195) 314 : audit [DBG] from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:42.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:42 vm05 bash[17520]: audit 2026-03-10T07:22:40.822956+0000 mgr.vm05.wnsmpp (mgr.14195) 314 : audit [DBG] from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:42.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:42 vm09 bash[21099]: audit 2026-03-10T07:22:40.822956+0000 mgr.vm05.wnsmpp (mgr.14195) 314 : audit [DBG] from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:42.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:42 vm09 bash[21099]: audit 2026-03-10T07:22:40.822956+0000 mgr.vm05.wnsmpp (mgr.14195) 314 : audit [DBG] from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:43 vm05 bash[17520]: audit 2026-03-10T07:22:41.981265+0000 mgr.vm05.wnsmpp (mgr.14195) 315 : audit [DBG] from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:43 vm05 bash[17520]: audit 2026-03-10T07:22:41.981265+0000 mgr.vm05.wnsmpp (mgr.14195) 315 : audit [DBG] from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:43 vm05 bash[17520]: cluster 2026-03-10T07:22:42.287339+0000 mgr.vm05.wnsmpp (mgr.14195) 316 : cluster [DBG] pgmap v170: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:43 vm05 bash[17520]: cluster 2026-03-10T07:22:42.287339+0000 mgr.vm05.wnsmpp (mgr.14195) 316 : cluster [DBG] pgmap v170: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:43 vm05 bash[17520]: audit 2026-03-10T07:22:42.676850+0000 mon.vm05 (mon.0) 923 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:43.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:43 vm05 bash[17520]: audit 2026-03-10T07:22:42.676850+0000 mon.vm05 (mon.0) 923 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:43.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:43 vm09 bash[21099]: audit 2026-03-10T07:22:41.981265+0000 mgr.vm05.wnsmpp (mgr.14195) 315 : audit [DBG] from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:43.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:43 vm09 bash[21099]: audit 2026-03-10T07:22:41.981265+0000 mgr.vm05.wnsmpp (mgr.14195) 315 : audit [DBG] from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:43.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:43 vm09 bash[21099]: cluster 2026-03-10T07:22:42.287339+0000 mgr.vm05.wnsmpp (mgr.14195) 316 : cluster [DBG] pgmap v170: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:43.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:43 vm09 bash[21099]: cluster 2026-03-10T07:22:42.287339+0000 mgr.vm05.wnsmpp (mgr.14195) 316 : cluster [DBG] pgmap v170: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:43.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:43 vm09 bash[21099]: audit 2026-03-10T07:22:42.676850+0000 mon.vm05 (mon.0) 923 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:43.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:43 vm09 bash[21099]: audit 2026-03-10T07:22:42.676850+0000 mon.vm05 (mon.0) 923 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:44.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:44 vm05 bash[17520]: audit 2026-03-10T07:22:43.146777+0000 mgr.vm05.wnsmpp (mgr.14195) 317 : audit [DBG] from='client.14772 -' entity='client.admin' 
cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:44.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:44 vm05 bash[17520]: audit 2026-03-10T07:22:43.146777+0000 mgr.vm05.wnsmpp (mgr.14195) 317 : audit [DBG] from='client.14772 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:44.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:44 vm09 bash[21099]: audit 2026-03-10T07:22:43.146777+0000 mgr.vm05.wnsmpp (mgr.14195) 317 : audit [DBG] from='client.14772 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:44.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:44 vm09 bash[21099]: audit 2026-03-10T07:22:43.146777+0000 mgr.vm05.wnsmpp (mgr.14195) 317 : audit [DBG] from='client.14772 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: cluster 2026-03-10T07:22:44.287713+0000 mgr.vm05.wnsmpp (mgr.14195) 318 : cluster [DBG] pgmap v171: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: cluster 2026-03-10T07:22:44.287713+0000 mgr.vm05.wnsmpp (mgr.14195) 318 : cluster [DBG] pgmap v171: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:44.310419+0000 mgr.vm05.wnsmpp (mgr.14195) 319 : audit [DBG] from='client.14776 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:44.310419+0000 mgr.vm05.wnsmpp (mgr.14195) 319 : audit [DBG] from='client.14776 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:45.291465+0000 mon.vm05 (mon.0) 924 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:45.291465+0000 mon.vm05 (mon.0) 924 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:45.295848+0000 mon.vm05 (mon.0) 925 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:45.295848+0000 mon.vm05 (mon.0) 925 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:45.324368+0000 mon.vm05 (mon.0) 926 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:22:45.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:45 vm05 bash[17520]: audit 2026-03-10T07:22:45.324368+0000 mon.vm05 (mon.0) 
926 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:22:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: cluster 2026-03-10T07:22:44.287713+0000 mgr.vm05.wnsmpp (mgr.14195) 318 : cluster [DBG] pgmap v171: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: cluster 2026-03-10T07:22:44.287713+0000 mgr.vm05.wnsmpp (mgr.14195) 318 : cluster [DBG] pgmap v171: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:44.310419+0000 mgr.vm05.wnsmpp (mgr.14195) 319 : audit [DBG] from='client.14776 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:45.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:44.310419+0000 mgr.vm05.wnsmpp (mgr.14195) 319 : audit [DBG] from='client.14776 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:45.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:45.291465+0000 mon.vm05 (mon.0) 924 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:45.291465+0000 mon.vm05 (mon.0) 924 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:45.295848+0000 mon.vm05 (mon.0) 925 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:45.295848+0000 mon.vm05 (mon.0) 925 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:45.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:45.324368+0000 mon.vm05 (mon.0) 926 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:22:45.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:45 vm09 bash[21099]: audit 2026-03-10T07:22:45.324368+0000 mon.vm05 (mon.0) 926 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:22:47.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:47 vm09 bash[21099]: audit 2026-03-10T07:22:45.486720+0000 mgr.vm05.wnsmpp (mgr.14195) 320 : audit [DBG] from='client.14780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:47.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:47 vm09 bash[21099]: audit 2026-03-10T07:22:45.486720+0000 mgr.vm05.wnsmpp (mgr.14195) 320 : audit [DBG] from='client.14780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:47.919 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:47 vm09 bash[21099]: cluster 2026-03-10T07:22:46.288211+0000 mgr.vm05.wnsmpp (mgr.14195) 321 : cluster [DBG] pgmap v172: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:47.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:47 vm09 bash[21099]: cluster 2026-03-10T07:22:46.288211+0000 mgr.vm05.wnsmpp (mgr.14195) 321 : cluster [DBG] pgmap v172: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:47 vm05 bash[17520]: audit 2026-03-10T07:22:45.486720+0000 mgr.vm05.wnsmpp (mgr.14195) 320 : audit [DBG] from='client.14780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:47 vm05 bash[17520]: audit 2026-03-10T07:22:45.486720+0000 mgr.vm05.wnsmpp (mgr.14195) 320 : audit [DBG] from='client.14780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:47 vm05 bash[17520]: cluster 2026-03-10T07:22:46.288211+0000 mgr.vm05.wnsmpp (mgr.14195) 321 : cluster [DBG] pgmap v172: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:47.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:47 vm05 bash[17520]: cluster 2026-03-10T07:22:46.288211+0000 mgr.vm05.wnsmpp (mgr.14195) 321 : cluster [DBG] pgmap v172: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:48.806 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:48 vm09 bash[21099]: audit 2026-03-10T07:22:46.682869+0000 mgr.vm05.wnsmpp (mgr.14195) 322 : audit [DBG] from='client.14784 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:48.806 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:48 vm09 bash[21099]: audit 2026-03-10T07:22:46.682869+0000 mgr.vm05.wnsmpp (mgr.14195) 322 : audit [DBG] from='client.14784 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:48.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:48 vm05 bash[17520]: audit 2026-03-10T07:22:46.682869+0000 mgr.vm05.wnsmpp (mgr.14195) 322 : audit [DBG] from='client.14784 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:48.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:48 vm05 bash[17520]: audit 2026-03-10T07:22:46.682869+0000 mgr.vm05.wnsmpp (mgr.14195) 322 : audit [DBG] from='client.14784 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:49 vm09 bash[21099]: audit 2026-03-10T07:22:47.861344+0000 mgr.vm05.wnsmpp (mgr.14195) 323 : audit [DBG] from='client.14788 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:49 vm09 bash[21099]: audit 2026-03-10T07:22:47.861344+0000 mgr.vm05.wnsmpp (mgr.14195) 323 : audit [DBG] from='client.14788 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.920 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:49 vm09 bash[21099]: cluster 2026-03-10T07:22:48.288599+0000 mgr.vm05.wnsmpp (mgr.14195) 324 : cluster [DBG] pgmap v173: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:49.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:49 vm09 bash[21099]: cluster 2026-03-10T07:22:48.288599+0000 mgr.vm05.wnsmpp (mgr.14195) 324 : cluster [DBG] pgmap v173: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:49.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:49 vm09 bash[21099]: audit 2026-03-10T07:22:49.047843+0000 mgr.vm05.wnsmpp (mgr.14195) 325 : audit [DBG] from='client.14792 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:49 vm09 bash[21099]: audit 2026-03-10T07:22:49.047843+0000 mgr.vm05.wnsmpp (mgr.14195) 325 : audit [DBG] from='client.14792 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:49 vm05 bash[17520]: audit 2026-03-10T07:22:47.861344+0000 mgr.vm05.wnsmpp (mgr.14195) 323 : audit [DBG] from='client.14788 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:49 vm05 bash[17520]: audit 2026-03-10T07:22:47.861344+0000 mgr.vm05.wnsmpp (mgr.14195) 323 : audit [DBG] from='client.14788 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:49 vm05 bash[17520]: cluster 2026-03-10T07:22:48.288599+0000 mgr.vm05.wnsmpp (mgr.14195) 324 : cluster [DBG] pgmap v173: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:49.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:49 vm05 bash[17520]: cluster 2026-03-10T07:22:48.288599+0000 mgr.vm05.wnsmpp (mgr.14195) 324 : cluster [DBG] pgmap v173: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:49.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:49 vm05 bash[17520]: audit 2026-03-10T07:22:49.047843+0000 mgr.vm05.wnsmpp (mgr.14195) 325 : audit [DBG] from='client.14792 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:49.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:49 vm05 bash[17520]: audit 2026-03-10T07:22:49.047843+0000 mgr.vm05.wnsmpp (mgr.14195) 325 : audit [DBG] from='client.14792 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:50.265 INFO:teuthology.orchestra.run.vm05.stdout:haproxy.nfs.foo.vm09.etnbzh vm09 *:2049,9002 stopped 0s ago 115s - - 2026-03-10T07:22:50.266 INFO:teuthology.orchestra.run.vm05.stdout:haproxy.nfs.foo.vm05.yhprte 2026-03-10T07:22:50.550 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled to start haproxy.nfs.foo.vm09.etnbzh on host 'vm09' 2026-03-10T07:22:50.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:49.685275+0000 mon.vm05 (mon.0) 927 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:50.961 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:49.685275+0000 mon.vm05 (mon.0) 927 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:50.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:49.692126+0000 mon.vm05 (mon.0) 928 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:50.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:49.692126+0000 mon.vm05 (mon.0) 928 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:50.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:50.248117+0000 mgr.vm05.wnsmpp (mgr.14195) 326 : audit [DBG] from='client.14796 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:50.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:50.248117+0000 mgr.vm05.wnsmpp (mgr.14195) 326 : audit [DBG] from='client.14796 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:50.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: cluster 2026-03-10T07:22:50.289042+0000 mgr.vm05.wnsmpp (mgr.14195) 327 : cluster [DBG] pgmap v174: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 0 op/s 2026-03-10T07:22:50.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: cluster 2026-03-10T07:22:50.289042+0000 mgr.vm05.wnsmpp (mgr.14195) 327 : cluster [DBG] pgmap v174: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 0 op/s 2026-03-10T07:22:50.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:50.542602+0000 mon.vm05 (mon.0) 929 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:50.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:50.542602+0000 mon.vm05 (mon.0) 929 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:50.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:50.548655+0000 mon.vm05 (mon.0) 930 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:50.962 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:50 vm05 bash[17520]: audit 2026-03-10T07:22:50.548655+0000 mon.vm05 (mon.0) 930 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.169 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:49.685275+0000 mon.vm05 (mon.0) 927 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:49.685275+0000 mon.vm05 (mon.0) 927 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:49.692126+0000 mon.vm05 (mon.0) 928 : audit [INF] from='mgr.14195 
192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:49.692126+0000 mon.vm05 (mon.0) 928 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:50.248117+0000 mgr.vm05.wnsmpp (mgr.14195) 326 : audit [DBG] from='client.14796 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:50.248117+0000 mgr.vm05.wnsmpp (mgr.14195) 326 : audit [DBG] from='client.14796 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: cluster 2026-03-10T07:22:50.289042+0000 mgr.vm05.wnsmpp (mgr.14195) 327 : cluster [DBG] pgmap v174: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 0 op/s 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: cluster 2026-03-10T07:22:50.289042+0000 mgr.vm05.wnsmpp (mgr.14195) 327 : cluster [DBG] pgmap v174: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 0 op/s 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:50.542602+0000 mon.vm05 (mon.0) 929 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:50.542602+0000 mon.vm05 (mon.0) 929 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:50.548655+0000 mon.vm05 (mon.0) 930 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:51.170 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:50 vm09 bash[21099]: audit 2026-03-10T07:22:50.548655+0000 mon.vm05 (mon.0) 930 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:50.534209+0000 mgr.vm05.wnsmpp (mgr.14195) 328 : audit [DBG] from='client.24501 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "start", "name": "haproxy.nfs.foo.vm09.etnbzh", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:50.534209+0000 mgr.vm05.wnsmpp (mgr.14195) 328 : audit [DBG] from='client.24501 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "start", "name": "haproxy.nfs.foo.vm09.etnbzh", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: cephadm 2026-03-10T07:22:50.534780+0000 mgr.vm05.wnsmpp (mgr.14195) 329 : cephadm [INF] Schedule start daemon haproxy.nfs.foo.vm09.etnbzh 2026-03-10T07:22:52.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: cephadm 2026-03-10T07:22:50.534780+0000 mgr.vm05.wnsmpp 
(mgr.14195) 329 : cephadm [INF] Schedule start daemon haproxy.nfs.foo.vm09.etnbzh 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:50.777471+0000 mgr.vm05.wnsmpp (mgr.14195) 330 : audit [DBG] from='client.14802 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:50.777471+0000 mgr.vm05.wnsmpp (mgr.14195) 330 : audit [DBG] from='client.14802 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.178240+0000 mon.vm05 (mon.0) 931 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.178240+0000 mon.vm05 (mon.0) 931 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.185626+0000 mon.vm05 (mon.0) 932 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.185626+0000 mon.vm05 (mon.0) 932 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.187136+0000 mon.vm05 (mon.0) 933 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.187136+0000 mon.vm05 (mon.0) 933 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.187740+0000 mon.vm05 (mon.0) 934 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.187740+0000 mon.vm05 (mon.0) 934 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.192537+0000 mon.vm05 (mon.0) 935 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.192537+0000 mon.vm05 (mon.0) 935 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.194241+0000 mon.vm05 (mon.0) 936 : audit [DBG] from='mgr.14195 
192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.194241+0000 mon.vm05 (mon.0) 936 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.201229+0000 mon.vm05 (mon.0) 937 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.462 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:52 vm05 bash[17520]: audit 2026-03-10T07:22:51.201229+0000 mon.vm05 (mon.0) 937 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:50.534209+0000 mgr.vm05.wnsmpp (mgr.14195) 328 : audit [DBG] from='client.24501 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "start", "name": "haproxy.nfs.foo.vm09.etnbzh", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:50.534209+0000 mgr.vm05.wnsmpp (mgr.14195) 328 : audit [DBG] from='client.24501 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "start", "name": "haproxy.nfs.foo.vm09.etnbzh", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: cephadm 2026-03-10T07:22:50.534780+0000 mgr.vm05.wnsmpp (mgr.14195) 329 : cephadm [INF] Schedule start daemon haproxy.nfs.foo.vm09.etnbzh 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: cephadm 2026-03-10T07:22:50.534780+0000 mgr.vm05.wnsmpp (mgr.14195) 329 : cephadm [INF] Schedule start daemon haproxy.nfs.foo.vm09.etnbzh 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:50.777471+0000 mgr.vm05.wnsmpp (mgr.14195) 330 : audit [DBG] from='client.14802 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:50.777471+0000 mgr.vm05.wnsmpp (mgr.14195) 330 : audit [DBG] from='client.14802 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.178240+0000 mon.vm05 (mon.0) 931 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.178240+0000 mon.vm05 (mon.0) 931 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.185626+0000 mon.vm05 (mon.0) 932 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 
2026-03-10T07:22:51.185626+0000 mon.vm05 (mon.0) 932 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.187136+0000 mon.vm05 (mon.0) 933 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.187136+0000 mon.vm05 (mon.0) 933 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.187740+0000 mon.vm05 (mon.0) 934 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.187740+0000 mon.vm05 (mon.0) 934 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.192537+0000 mon.vm05 (mon.0) 935 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.192537+0000 mon.vm05 (mon.0) 935 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.194241+0000 mon.vm05 (mon.0) 936 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.194241+0000 mon.vm05 (mon.0) 936 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.201229+0000 mon.vm05 (mon.0) 937 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:52.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:52 vm09 bash[21099]: audit 2026-03-10T07:22:51.201229+0000 mon.vm05 (mon.0) 937 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:22:53.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:53 vm05 bash[17520]: audit 2026-03-10T07:22:51.979267+0000 mgr.vm05.wnsmpp (mgr.14195) 331 : audit [DBG] from='client.14806 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:53.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:53 vm05 bash[17520]: audit 2026-03-10T07:22:51.979267+0000 mgr.vm05.wnsmpp (mgr.14195) 331 : audit [DBG] from='client.14806 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", 
""]}]: dispatch 2026-03-10T07:22:53.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:53 vm05 bash[17520]: cluster 2026-03-10T07:22:52.289434+0000 mgr.vm05.wnsmpp (mgr.14195) 332 : cluster [DBG] pgmap v175: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:53.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:53 vm05 bash[17520]: cluster 2026-03-10T07:22:52.289434+0000 mgr.vm05.wnsmpp (mgr.14195) 332 : cluster [DBG] pgmap v175: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:53.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:53 vm09 bash[21099]: audit 2026-03-10T07:22:51.979267+0000 mgr.vm05.wnsmpp (mgr.14195) 331 : audit [DBG] from='client.14806 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:53.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:53 vm09 bash[21099]: audit 2026-03-10T07:22:51.979267+0000 mgr.vm05.wnsmpp (mgr.14195) 331 : audit [DBG] from='client.14806 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:53.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:53 vm09 bash[21099]: cluster 2026-03-10T07:22:52.289434+0000 mgr.vm05.wnsmpp (mgr.14195) 332 : cluster [DBG] pgmap v175: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:53.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:53 vm09 bash[21099]: cluster 2026-03-10T07:22:52.289434+0000 mgr.vm05.wnsmpp (mgr.14195) 332 : cluster [DBG] pgmap v175: 97 pgs: 97 active+clean; 469 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:22:54.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:54 vm05 bash[17520]: audit 2026-03-10T07:22:53.179461+0000 mgr.vm05.wnsmpp (mgr.14195) 333 : audit [DBG] from='client.14810 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:54.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:54 vm05 bash[17520]: audit 2026-03-10T07:22:53.179461+0000 mgr.vm05.wnsmpp (mgr.14195) 333 : audit [DBG] from='client.14810 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:54.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:54 vm09 bash[21099]: audit 2026-03-10T07:22:53.179461+0000 mgr.vm05.wnsmpp (mgr.14195) 333 : audit [DBG] from='client.14810 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:54.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:54 vm09 bash[21099]: audit 2026-03-10T07:22:53.179461+0000 mgr.vm05.wnsmpp (mgr.14195) 333 : audit [DBG] from='client.14810 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:55.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:55 vm05 bash[17520]: cluster 2026-03-10T07:22:54.289829+0000 mgr.vm05.wnsmpp (mgr.14195) 334 : cluster [DBG] pgmap v176: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:55.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:55 vm05 bash[17520]: cluster 2026-03-10T07:22:54.289829+0000 mgr.vm05.wnsmpp (mgr.14195) 334 : cluster [DBG] pgmap v176: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 
B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:55.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:55 vm05 bash[17520]: audit 2026-03-10T07:22:54.363423+0000 mgr.vm05.wnsmpp (mgr.14195) 335 : audit [DBG] from='client.14814 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:55.461 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:55 vm05 bash[17520]: audit 2026-03-10T07:22:54.363423+0000 mgr.vm05.wnsmpp (mgr.14195) 335 : audit [DBG] from='client.14814 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:55.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:55 vm09 bash[21099]: cluster 2026-03-10T07:22:54.289829+0000 mgr.vm05.wnsmpp (mgr.14195) 334 : cluster [DBG] pgmap v176: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:55.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:55 vm09 bash[21099]: cluster 2026-03-10T07:22:54.289829+0000 mgr.vm05.wnsmpp (mgr.14195) 334 : cluster [DBG] pgmap v176: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:55.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:55 vm09 bash[21099]: audit 2026-03-10T07:22:54.363423+0000 mgr.vm05.wnsmpp (mgr.14195) 335 : audit [DBG] from='client.14814 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:55.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:55 vm09 bash[21099]: audit 2026-03-10T07:22:54.363423+0000 mgr.vm05.wnsmpp (mgr.14195) 335 : audit [DBG] from='client.14814 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:57.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:57 vm09 bash[21099]: audit 2026-03-10T07:22:55.543294+0000 mgr.vm05.wnsmpp (mgr.14195) 336 : audit [DBG] from='client.14818 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:57.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:57 vm09 bash[21099]: audit 2026-03-10T07:22:55.543294+0000 mgr.vm05.wnsmpp (mgr.14195) 336 : audit [DBG] from='client.14818 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:57.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:57 vm09 bash[21099]: cluster 2026-03-10T07:22:56.290208+0000 mgr.vm05.wnsmpp (mgr.14195) 337 : cluster [DBG] pgmap v177: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:57.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:57 vm09 bash[21099]: cluster 2026-03-10T07:22:56.290208+0000 mgr.vm05.wnsmpp (mgr.14195) 337 : cluster [DBG] pgmap v177: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:57.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:57 vm05 bash[17520]: audit 2026-03-10T07:22:55.543294+0000 mgr.vm05.wnsmpp (mgr.14195) 336 : audit [DBG] from='client.14818 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:57.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:57 vm05 bash[17520]: audit 2026-03-10T07:22:55.543294+0000 mgr.vm05.wnsmpp (mgr.14195) 336 : audit [DBG] from='client.14818 -' entity='client.admin' 
cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:57.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:57 vm05 bash[17520]: cluster 2026-03-10T07:22:56.290208+0000 mgr.vm05.wnsmpp (mgr.14195) 337 : cluster [DBG] pgmap v177: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:57.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:57 vm05 bash[17520]: cluster 2026-03-10T07:22:56.290208+0000 mgr.vm05.wnsmpp (mgr.14195) 337 : cluster [DBG] pgmap v177: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:58.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:58 vm09 bash[21099]: audit 2026-03-10T07:22:56.727198+0000 mgr.vm05.wnsmpp (mgr.14195) 338 : audit [DBG] from='client.14822 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:58.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:58 vm09 bash[21099]: audit 2026-03-10T07:22:56.727198+0000 mgr.vm05.wnsmpp (mgr.14195) 338 : audit [DBG] from='client.14822 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:58.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:58 vm09 bash[21099]: audit 2026-03-10T07:22:57.677406+0000 mon.vm05 (mon.0) 938 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:58.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:58 vm09 bash[21099]: audit 2026-03-10T07:22:57.677406+0000 mon.vm05 (mon.0) 938 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:58 vm05 bash[17520]: audit 2026-03-10T07:22:56.727198+0000 mgr.vm05.wnsmpp (mgr.14195) 338 : audit [DBG] from='client.14822 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:58 vm05 bash[17520]: audit 2026-03-10T07:22:56.727198+0000 mgr.vm05.wnsmpp (mgr.14195) 338 : audit [DBG] from='client.14822 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:58 vm05 bash[17520]: audit 2026-03-10T07:22:57.677406+0000 mon.vm05 (mon.0) 938 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:58.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:58 vm05 bash[17520]: audit 2026-03-10T07:22:57.677406+0000 mon.vm05 (mon.0) 938 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:22:59.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:59 vm09 bash[21099]: audit 2026-03-10T07:22:57.906677+0000 mgr.vm05.wnsmpp (mgr.14195) 339 : audit [DBG] from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:59.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:59 vm09 bash[21099]: audit 2026-03-10T07:22:57.906677+0000 mgr.vm05.wnsmpp (mgr.14195) 339 
: audit [DBG] from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:59.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:59 vm09 bash[21099]: cluster 2026-03-10T07:22:58.290600+0000 mgr.vm05.wnsmpp (mgr.14195) 340 : cluster [DBG] pgmap v178: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:59.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:22:59 vm09 bash[21099]: cluster 2026-03-10T07:22:58.290600+0000 mgr.vm05.wnsmpp (mgr.14195) 340 : cluster [DBG] pgmap v178: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:59 vm05 bash[17520]: audit 2026-03-10T07:22:57.906677+0000 mgr.vm05.wnsmpp (mgr.14195) 339 : audit [DBG] from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:59 vm05 bash[17520]: audit 2026-03-10T07:22:57.906677+0000 mgr.vm05.wnsmpp (mgr.14195) 339 : audit [DBG] from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:22:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:59 vm05 bash[17520]: cluster 2026-03-10T07:22:58.290600+0000 mgr.vm05.wnsmpp (mgr.14195) 340 : cluster [DBG] pgmap v178: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:22:59.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:22:59 vm05 bash[17520]: cluster 2026-03-10T07:22:58.290600+0000 mgr.vm05.wnsmpp (mgr.14195) 340 : cluster [DBG] pgmap v178: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 0 op/s 2026-03-10T07:23:00.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:22:59.079680+0000 mgr.vm05.wnsmpp (mgr.14195) 341 : audit [DBG] from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:00.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:22:59.079680+0000 mgr.vm05.wnsmpp (mgr.14195) 341 : audit [DBG] from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:00.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:23:00.112477+0000 mon.vm05 (mon.0) 939 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:23:00.112477+0000 mon.vm05 (mon.0) 939 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:23:00.118787+0000 mon.vm05 (mon.0) 940 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:23:00.118787+0000 mon.vm05 (mon.0) 940 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.670 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:23:00.151692+0000 mon.vm05 (mon.0) 941 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:23:00.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:00 vm09 bash[21099]: audit 2026-03-10T07:23:00.151692+0000 mon.vm05 (mon.0) 941 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:22:59.079680+0000 mgr.vm05.wnsmpp (mgr.14195) 341 : audit [DBG] from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:22:59.079680+0000 mgr.vm05.wnsmpp (mgr.14195) 341 : audit [DBG] from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:23:00.112477+0000 mon.vm05 (mon.0) 939 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:23:00.112477+0000 mon.vm05 (mon.0) 939 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:23:00.118787+0000 mon.vm05 (mon.0) 940 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:23:00.118787+0000 mon.vm05 (mon.0) 940 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:23:00.151692+0000 mon.vm05 (mon.0) 941 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:23:00.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:00 vm05 bash[17520]: audit 2026-03-10T07:23:00.151692+0000 mon.vm05 (mon.0) 941 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:23:01.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:01 vm09 bash[21099]: audit 2026-03-10T07:23:00.275772+0000 mgr.vm05.wnsmpp (mgr.14195) 342 : audit [DBG] from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:01.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:01 vm09 bash[21099]: audit 2026-03-10T07:23:00.275772+0000 mgr.vm05.wnsmpp (mgr.14195) 342 : audit [DBG] from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:01.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:01 vm09 bash[21099]: cluster 2026-03-10T07:23:00.290956+0000 mgr.vm05.wnsmpp (mgr.14195) 343 : cluster [DBG] pgmap v179: 97 pgs: 97 
active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:01.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:01 vm09 bash[21099]: cluster 2026-03-10T07:23:00.290956+0000 mgr.vm05.wnsmpp (mgr.14195) 343 : cluster [DBG] pgmap v179: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:01 vm05 bash[17520]: audit 2026-03-10T07:23:00.275772+0000 mgr.vm05.wnsmpp (mgr.14195) 342 : audit [DBG] from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:01 vm05 bash[17520]: audit 2026-03-10T07:23:00.275772+0000 mgr.vm05.wnsmpp (mgr.14195) 342 : audit [DBG] from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:01 vm05 bash[17520]: cluster 2026-03-10T07:23:00.290956+0000 mgr.vm05.wnsmpp (mgr.14195) 343 : cluster [DBG] pgmap v179: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:01.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:01 vm05 bash[17520]: cluster 2026-03-10T07:23:00.290956+0000 mgr.vm05.wnsmpp (mgr.14195) 343 : cluster [DBG] pgmap v179: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:02.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:02 vm05 bash[17520]: audit 2026-03-10T07:23:01.459418+0000 mgr.vm05.wnsmpp (mgr.14195) 344 : audit [DBG] from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:02.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:02 vm05 bash[17520]: audit 2026-03-10T07:23:01.459418+0000 mgr.vm05.wnsmpp (mgr.14195) 344 : audit [DBG] from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:02.806 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:02 vm09 bash[21099]: audit 2026-03-10T07:23:01.459418+0000 mgr.vm05.wnsmpp (mgr.14195) 344 : audit [DBG] from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:02.806 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:02 vm09 bash[21099]: audit 2026-03-10T07:23:01.459418+0000 mgr.vm05.wnsmpp (mgr.14195) 344 : audit [DBG] from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:03.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:03 vm05 bash[17520]: cluster 2026-03-10T07:23:02.291386+0000 mgr.vm05.wnsmpp (mgr.14195) 345 : cluster [DBG] pgmap v180: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:03.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:03 vm05 bash[17520]: cluster 2026-03-10T07:23:02.291386+0000 mgr.vm05.wnsmpp (mgr.14195) 345 : cluster [DBG] pgmap v180: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:03.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:03 vm09 bash[21099]: cluster 2026-03-10T07:23:02.291386+0000 
mgr.vm05.wnsmpp (mgr.14195) 345 : cluster [DBG] pgmap v180: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:03.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:03 vm09 bash[21099]: cluster 2026-03-10T07:23:02.291386+0000 mgr.vm05.wnsmpp (mgr.14195) 345 : cluster [DBG] pgmap v180: 97 pgs: 97 active+clean; 478 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T07:23:04.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:04 vm05 bash[17520]: audit 2026-03-10T07:23:02.649576+0000 mgr.vm05.wnsmpp (mgr.14195) 346 : audit [DBG] from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:04.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:04 vm05 bash[17520]: audit 2026-03-10T07:23:02.649576+0000 mgr.vm05.wnsmpp (mgr.14195) 346 : audit [DBG] from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:04.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:04 vm09 bash[21099]: audit 2026-03-10T07:23:02.649576+0000 mgr.vm05.wnsmpp (mgr.14195) 346 : audit [DBG] from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:04.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:04 vm09 bash[21099]: audit 2026-03-10T07:23:02.649576+0000 mgr.vm05.wnsmpp (mgr.14195) 346 : audit [DBG] from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:05.033 INFO:teuthology.orchestra.run.vm05.stdout:haproxy.nfs.foo.vm09.etnbzh vm09 *:2049,9002 running (4s) 0s ago 2m 3487k - 2.3.17-d1c9119 e85424b0d443 5869cf6cb4b1 2026-03-10T07:23:05.165 INFO:teuthology.run_tasks:Running task cephadm.shell... 
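The steady stream of `orch ps` audit entries above (client.14806 through client.14850, roughly one per second) is the test polling the orchestrator after restarting each haproxy daemon; the stdout line at 07:23:05.033 showing haproxy.nfs.foo.vm09.etnbzh as running (4s) is the poll that lets the failover check complete and the next cephadm.shell task start. A minimal sketch of that wait pattern, assuming an admin `ceph` CLI on the host (the daemon name is taken from this run; substitute your own):

    #!/usr/bin/env bash
    # Poll 'ceph orch ps' until the daemon reports the desired state.
    # Each retry surfaces as one "orch ps ... dispatch" audit entry in the mon log.
    daemon=haproxy.nfs.foo.vm09.etnbzh   # example daemon name from this run
    state=running
    while ! ceph orch ps | grep "$daemon" | grep -q "$state"; do
        sleep 1
    done
    echo "$daemon is $state"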
2026-03-10T07:23:05.167 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-10T07:23:05.167 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'stat -c '"'"'%u %g'"'"' /var/log/ceph | grep '"'"'167 167'"'"'' 2026-03-10T07:23:05.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:03.827400+0000 mgr.vm05.wnsmpp (mgr.14195) 347 : audit [DBG] from='client.14846 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:05.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:03.827400+0000 mgr.vm05.wnsmpp (mgr.14195) 347 : audit [DBG] from='client.14846 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:05.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: cluster 2026-03-10T07:23:04.291869+0000 mgr.vm05.wnsmpp (mgr.14195) 348 : cluster [DBG] pgmap v181: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-10T07:23:05.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: cluster 2026-03-10T07:23:04.291869+0000 mgr.vm05.wnsmpp (mgr.14195) 348 : cluster [DBG] pgmap v181: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-10T07:23:05.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.621648+0000 mon.vm05 (mon.0) 942 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.621648+0000 mon.vm05 (mon.0) 942 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.626863+0000 mon.vm05 (mon.0) 943 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.626863+0000 mon.vm05 (mon.0) 943 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.627850+0000 mon.vm05 (mon.0) 944 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.627850+0000 mon.vm05 (mon.0) 944 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.628623+0000 mon.vm05 (mon.0) 945 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
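The DEBUG line above shows how teuthology runs each check: it wraps the command in a one-shot `cephadm shell` container, pinned to the CI build with --image and pointed at the cluster via -c, -k, and --fsid, then hands the actual test to `bash -c`. The first check asserts that /var/log/ceph on the host is owned by uid/gid 167:167, the ceph user and group used inside cephadm containers. A rough stand-alone equivalent, assuming cephadm is on PATH and the default image is acceptable (the fsid below is this run's cluster id):

    #!/usr/bin/env bash
    # One-shot cephadm shell invocation mirroring the teuthology DEBUG line above.
    FSID=f0f57d3c-1c50-11f1-837e-f755e850132e   # this run's cluster fsid
    sudo cephadm shell \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        --fsid "$FSID" \
        -- bash -c "stat -c '%u %g' /var/log/ceph | grep '167 167'"
    # grep exits non-zero if the ownership differs, which fails the task.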
2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.628623+0000 mon.vm05 (mon.0) 945 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.633647+0000 mon.vm05 (mon.0) 946 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.633647+0000 mon.vm05 (mon.0) 946 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.635977+0000 mon.vm05 (mon.0) 947 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.635977+0000 mon.vm05 (mon.0) 947 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.644087+0000 mon.vm05 (mon.0) 948 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.712 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:05 vm05 bash[17520]: audit 2026-03-10T07:23:04.644087+0000 mon.vm05 (mon.0) 948 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:03.827400+0000 mgr.vm05.wnsmpp (mgr.14195) 347 : audit [DBG] from='client.14846 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:03.827400+0000 mgr.vm05.wnsmpp (mgr.14195) 347 : audit [DBG] from='client.14846 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: cluster 2026-03-10T07:23:04.291869+0000 mgr.vm05.wnsmpp (mgr.14195) 348 : cluster [DBG] pgmap v181: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: cluster 2026-03-10T07:23:04.291869+0000 mgr.vm05.wnsmpp (mgr.14195) 348 : cluster [DBG] pgmap v181: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 1.3 KiB/s wr, 1 op/s 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.621648+0000 mon.vm05 (mon.0) 942 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.621648+0000 mon.vm05 (mon.0) 942 : audit [INF] 
from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.626863+0000 mon.vm05 (mon.0) 943 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.626863+0000 mon.vm05 (mon.0) 943 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.627850+0000 mon.vm05 (mon.0) 944 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.627850+0000 mon.vm05 (mon.0) 944 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.628623+0000 mon.vm05 (mon.0) 945 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.628623+0000 mon.vm05 (mon.0) 945 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.633647+0000 mon.vm05 (mon.0) 946 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.633647+0000 mon.vm05 (mon.0) 946 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.635977+0000 mon.vm05 (mon.0) 947 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.635977+0000 mon.vm05 (mon.0) 947 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.644087+0000 mon.vm05 (mon.0) 948 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:05.772 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:05 vm09 bash[21099]: audit 2026-03-10T07:23:04.644087+0000 mon.vm05 (mon.0) 948 : audit [INF] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' 2026-03-10T07:23:06.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:06 vm05 bash[17520]: audit 2026-03-10T07:23:05.017737+0000 mgr.vm05.wnsmpp 
(mgr.14195) 349 : audit [DBG] from='client.14850 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:06.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:06 vm05 bash[17520]: audit 2026-03-10T07:23:05.017737+0000 mgr.vm05.wnsmpp (mgr.14195) 349 : audit [DBG] from='client.14850 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:06.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:06 vm09 bash[21099]: audit 2026-03-10T07:23:05.017737+0000 mgr.vm05.wnsmpp (mgr.14195) 349 : audit [DBG] from='client.14850 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:06.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:06 vm09 bash[21099]: audit 2026-03-10T07:23:05.017737+0000 mgr.vm05.wnsmpp (mgr.14195) 349 : audit [DBG] from='client.14850 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:07.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:07 vm09 bash[21099]: cluster 2026-03-10T07:23:06.292236+0000 mgr.vm05.wnsmpp (mgr.14195) 350 : cluster [DBG] pgmap v182: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:07.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:07 vm09 bash[21099]: cluster 2026-03-10T07:23:06.292236+0000 mgr.vm05.wnsmpp (mgr.14195) 350 : cluster [DBG] pgmap v182: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:07.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:07 vm05 bash[17520]: cluster 2026-03-10T07:23:06.292236+0000 mgr.vm05.wnsmpp (mgr.14195) 350 : cluster [DBG] pgmap v182: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:07.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:07 vm05 bash[17520]: cluster 2026-03-10T07:23:06.292236+0000 mgr.vm05.wnsmpp (mgr.14195) 350 : cluster [DBG] pgmap v182: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:08.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:08 vm09 bash[21099]: cluster 2026-03-10T07:23:08.293409+0000 mgr.vm05.wnsmpp (mgr.14195) 351 : cluster [DBG] pgmap v183: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:08.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:08 vm09 bash[21099]: cluster 2026-03-10T07:23:08.293409+0000 mgr.vm05.wnsmpp (mgr.14195) 351 : cluster [DBG] pgmap v183: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:08.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:08 vm05 bash[17520]: cluster 2026-03-10T07:23:08.293409+0000 mgr.vm05.wnsmpp (mgr.14195) 351 : cluster [DBG] pgmap v183: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:08.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:08 vm05 bash[17520]: cluster 2026-03-10T07:23:08.293409+0000 mgr.vm05.wnsmpp (mgr.14195) 351 : cluster [DBG] pgmap v183: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:09.824 
INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:23:09.936 INFO:teuthology.orchestra.run.vm05.stdout:167 167 2026-03-10T07:23:09.987 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch status' 2026-03-10T07:23:11.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:11 vm09 bash[21099]: cluster 2026-03-10T07:23:10.293907+0000 mgr.vm05.wnsmpp (mgr.14195) 352 : cluster [DBG] pgmap v184: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:11.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:11 vm09 bash[21099]: cluster 2026-03-10T07:23:10.293907+0000 mgr.vm05.wnsmpp (mgr.14195) 352 : cluster [DBG] pgmap v184: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:11.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:11 vm05 bash[17520]: cluster 2026-03-10T07:23:10.293907+0000 mgr.vm05.wnsmpp (mgr.14195) 352 : cluster [DBG] pgmap v184: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:11.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:11 vm05 bash[17520]: cluster 2026-03-10T07:23:10.293907+0000 mgr.vm05.wnsmpp (mgr.14195) 352 : cluster [DBG] pgmap v184: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:13.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:13 vm09 bash[21099]: cluster 2026-03-10T07:23:12.294270+0000 mgr.vm05.wnsmpp (mgr.14195) 353 : cluster [DBG] pgmap v185: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:13.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:13 vm09 bash[21099]: cluster 2026-03-10T07:23:12.294270+0000 mgr.vm05.wnsmpp (mgr.14195) 353 : cluster [DBG] pgmap v185: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:13.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:13 vm09 bash[21099]: audit 2026-03-10T07:23:12.677469+0000 mon.vm05 (mon.0) 949 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:23:13.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:13 vm09 bash[21099]: audit 2026-03-10T07:23:12.677469+0000 mon.vm05 (mon.0) 949 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:23:13.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:13 vm05 bash[17520]: cluster 2026-03-10T07:23:12.294270+0000 mgr.vm05.wnsmpp (mgr.14195) 353 : cluster [DBG] pgmap v185: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:13.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:13 vm05 bash[17520]: cluster 2026-03-10T07:23:12.294270+0000 mgr.vm05.wnsmpp (mgr.14195) 353 : cluster [DBG] pgmap v185: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB 
/ 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:13.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:13 vm05 bash[17520]: audit 2026-03-10T07:23:12.677469+0000 mon.vm05 (mon.0) 949 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:23:13.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:13 vm05 bash[17520]: audit 2026-03-10T07:23:12.677469+0000 mon.vm05 (mon.0) 949 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:23:13.871 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config 2026-03-10T07:23:14.132 INFO:teuthology.orchestra.run.vm05.stdout:Backend: cephadm 2026-03-10T07:23:14.132 INFO:teuthology.orchestra.run.vm05.stdout:Available: Yes 2026-03-10T07:23:14.132 INFO:teuthology.orchestra.run.vm05.stdout:Paused: No 2026-03-10T07:23:14.196 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch ps' 2026-03-10T07:23:15.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:15 vm09 bash[21099]: audit 2026-03-10T07:23:14.132021+0000 mgr.vm05.wnsmpp (mgr.14195) 354 : audit [DBG] from='client.14854 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:15.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:15 vm09 bash[21099]: audit 2026-03-10T07:23:14.132021+0000 mgr.vm05.wnsmpp (mgr.14195) 354 : audit [DBG] from='client.14854 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:15.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:15 vm09 bash[21099]: cluster 2026-03-10T07:23:14.294746+0000 mgr.vm05.wnsmpp (mgr.14195) 355 : cluster [DBG] pgmap v186: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:15.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:15 vm09 bash[21099]: cluster 2026-03-10T07:23:14.294746+0000 mgr.vm05.wnsmpp (mgr.14195) 355 : cluster [DBG] pgmap v186: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:15.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:15 vm05 bash[17520]: audit 2026-03-10T07:23:14.132021+0000 mgr.vm05.wnsmpp (mgr.14195) 354 : audit [DBG] from='client.14854 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:15.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:15 vm05 bash[17520]: audit 2026-03-10T07:23:14.132021+0000 mgr.vm05.wnsmpp (mgr.14195) 354 : audit [DBG] from='client.14854 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:15.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:15 vm05 bash[17520]: cluster 2026-03-10T07:23:14.294746+0000 mgr.vm05.wnsmpp (mgr.14195) 355 : cluster [DBG] pgmap v186: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s 2026-03-10T07:23:15.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
07:23:15 vm05 bash[17520]: cluster 2026-03-10T07:23:14.294746+0000 mgr.vm05.wnsmpp (mgr.14195) 355 : cluster [DBG] pgmap v186: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
2026-03-10T07:23:17.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:17 vm09 bash[21099]: cluster 2026-03-10T07:23:16.295237+0000 mgr.vm05.wnsmpp (mgr.14195) 356 : cluster [DBG] pgmap v187: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:17.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:17 vm09 bash[21099]: cluster 2026-03-10T07:23:16.295237+0000 mgr.vm05.wnsmpp (mgr.14195) 356 : cluster [DBG] pgmap v187: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:17.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:17 vm05 bash[17520]: cluster 2026-03-10T07:23:16.295237+0000 mgr.vm05.wnsmpp (mgr.14195) 356 : cluster [DBG] pgmap v187: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:17.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:17 vm05 bash[17520]: cluster 2026-03-10T07:23:16.295237+0000 mgr.vm05.wnsmpp (mgr.14195) 356 : cluster [DBG] pgmap v187: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:17.926 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (5m) 27s ago 5m 14.6M - 0.25.0 c8568f914cd2 83564f78fb3d
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 *:9926 running (6m) 27s ago 6m 9404k - 19.2.3-678-ge911bdeb 654f31e6858e b1c7ad206111
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 *:9926 running (5m) 13s ago 5m 6395k - 19.2.3-678-ge911bdeb 654f31e6858e 6d763e025bef
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (6m) 27s ago 6m 7296k - 19.2.3-678-ge911bdeb 654f31e6858e eee6421fab37
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (5m) 13s ago 5m 7308k - 19.2.3-678-ge911bdeb 654f31e6858e 93d45dc69cc5
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (5m) 27s ago 5m 63.7M - 10.4.0 c8b91775d855 1d4334f91f97
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:haproxy.nfs.foo.vm05.yhprte vm05 *:2049,9002 running (49s) 27s ago 2m 3588k - 2.3.17-d1c9119 e85424b0d443 7adcd637ba7d
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:haproxy.nfs.foo.vm09.etnbzh vm09 *:2049,9002 running (18s) 13s ago 2m 3487k - 2.3.17-d1c9119 e85424b0d443 5869cf6cb4b1
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:keepalived.nfs.foo.vm05.zypjfy vm05 running (2m) 27s ago 2m 2488k - 2.2.4 4a3a1ff181d9 08e5fde24905
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:keepalived.nfs.foo.vm09.ydtazh vm09 running (2m) 13s ago 2m 2480k - 2.2.4 4a3a1ff181d9 e19ceace0871
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:mds.foofs.vm05.oxovsp vm05 running (2m) 27s ago 2m 16.6M - 19.2.3-678-ge911bdeb 654f31e6858e 0858e13b4e1a
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:mds.foofs.vm09.kuyylf vm09 running (2m) 13s ago 2m 12.9M - 19.2.3-678-ge911bdeb 654f31e6858e 05c54d5c0d24
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.wnsmpp vm05 *:9283,8765,8443 running (6m) 27s ago 6m 543M - 19.2.3-678-ge911bdeb 654f31e6858e 7e456e14e1b3
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.rfdvwa vm09 *:8443,9283,8765 running (5m) 13s ago 5m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 77bbabd48a81
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (6m) 27s ago 6m 49.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 9a36265d35f0
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (5m) 13s ago 5m 39.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e a99639a157b8
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:nfs.foo.0.1.vm05.etqrmm vm05 *:12049 running (114s) 27s ago 115s 51.0M - 5.9 654f31e6858e 0e42589c4f44
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:nfs.foo.1.1.vm09.diytrs vm09 *:12049 running (113s) 13s ago 114s 47.4M - 5.9 654f31e6858e 18ff3d8135b4
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (5m) 27s ago 6m 7743k - 1.7.0 72c9c2088986 4f78d5630475
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (5m) 13s ago 5m 7468k - 1.7.0 72c9c2088986 a137075cccbf
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm09 running (4m) 13s ago 4m 61.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 265c2a142782
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (4m) 27s ago 4m 61.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c17e07c89163
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm09 running (4m) 13s ago 4m 60.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e ef7f7be900e3
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (4m) 27s ago 4m 64.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 89e8deae7ef3
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (4m) 13s ago 4m 60.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 75a9e4910012
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm05 running (4m) 27s ago 4m 40.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e febb1912b095
2026-03-10T07:23:18.200 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm09 running (4m) 13s ago 4m 62.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e a323290ae613
2026-03-10T07:23:18.201 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm05 running (4m) 27s ago 4m 42.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 527bf6f7a638
2026-03-10T07:23:18.201 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (107s) 27s ago 5m 38.4M - 2.51.0 1d3b7f56885b 71c6978bef3f
2026-03-10T07:23:18.259 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch ls'
2026-03-10T07:23:18.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:18 vm09 bash[21099]: cluster 2026-03-10T07:23:17.409224+0000 mon.vm05 (mon.0) 950 : cluster [DBG] mgrmap e20: vm05.wnsmpp(active, since 5m), standbys:
vm09.rfdvwa 2026-03-10T07:23:18.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:18 vm09 bash[21099]: cluster 2026-03-10T07:23:17.409224+0000 mon.vm05 (mon.0) 950 : cluster [DBG] mgrmap e20: vm05.wnsmpp(active, since 5m), standbys: vm09.rfdvwa 2026-03-10T07:23:18.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:18 vm05 bash[17520]: cluster 2026-03-10T07:23:17.409224+0000 mon.vm05 (mon.0) 950 : cluster [DBG] mgrmap e20: vm05.wnsmpp(active, since 5m), standbys: vm09.rfdvwa 2026-03-10T07:23:18.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:18 vm05 bash[17520]: cluster 2026-03-10T07:23:17.409224+0000 mon.vm05 (mon.0) 950 : cluster [DBG] mgrmap e20: vm05.wnsmpp(active, since 5m), standbys: vm09.rfdvwa 2026-03-10T07:23:19.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:19 vm05 bash[17520]: audit 2026-03-10T07:23:18.194676+0000 mgr.vm05.wnsmpp (mgr.14195) 357 : audit [DBG] from='client.14858 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:19.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:19 vm05 bash[17520]: audit 2026-03-10T07:23:18.194676+0000 mgr.vm05.wnsmpp (mgr.14195) 357 : audit [DBG] from='client.14858 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:19.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:19 vm05 bash[17520]: cluster 2026-03-10T07:23:18.295701+0000 mgr.vm05.wnsmpp (mgr.14195) 358 : cluster [DBG] pgmap v188: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:23:19.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:19 vm05 bash[17520]: cluster 2026-03-10T07:23:18.295701+0000 mgr.vm05.wnsmpp (mgr.14195) 358 : cluster [DBG] pgmap v188: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:23:19.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:19 vm09 bash[21099]: audit 2026-03-10T07:23:18.194676+0000 mgr.vm05.wnsmpp (mgr.14195) 357 : audit [DBG] from='client.14858 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:19.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:19 vm09 bash[21099]: audit 2026-03-10T07:23:18.194676+0000 mgr.vm05.wnsmpp (mgr.14195) 357 : audit [DBG] from='client.14858 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:19.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:19 vm09 bash[21099]: cluster 2026-03-10T07:23:18.295701+0000 mgr.vm05.wnsmpp (mgr.14195) 358 : cluster [DBG] pgmap v188: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:23:19.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:19 vm09 bash[21099]: cluster 2026-03-10T07:23:18.295701+0000 mgr.vm05.wnsmpp (mgr.14195) 358 : cluster [DBG] pgmap v188: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T07:23:21.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:21 vm09 bash[21099]: cluster 2026-03-10T07:23:20.296187+0000 mgr.vm05.wnsmpp (mgr.14195) 359 : cluster [DBG] pgmap v189: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:23:21.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:21 vm09 bash[21099]: cluster 2026-03-10T07:23:20.296187+0000 
mgr.vm05.wnsmpp (mgr.14195) 359 : cluster [DBG] pgmap v189: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:21.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:21 vm05 bash[17520]: cluster 2026-03-10T07:23:20.296187+0000 mgr.vm05.wnsmpp (mgr.14195) 359 : cluster [DBG] pgmap v189: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:21.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:21 vm05 bash[17520]: cluster 2026-03-10T07:23:20.296187+0000 mgr.vm05.wnsmpp (mgr.14195) 359 : cluster [DBG] pgmap v189: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:21.973 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:23:22.262 INFO:teuthology.orchestra.run.vm05.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager ?:9093,9094 1/1 31s ago 6m count:1
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter ?:9926 2/2 31s ago 6m *
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:crash 2/2 31s ago 6m *
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:grafana ?:3000 1/1 31s ago 6m count:1
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:ingress.nfs.foo 12.12.1.105:2049,9002 4/4 31s ago 2m count:2
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:mds.foofs 2/2 31s ago 2m count:2
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:mgr 2/2 31s ago 6m count:2
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:mon 2/2 31s ago 5m vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09;count:2
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:nfs.foo ?:12049 2/2 31s ago 2m count:2
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter ?:9100 2/2 31s ago 6m *
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:osd.all-available-devices 8 31s ago 5m *
2026-03-10T07:23:22.263 INFO:teuthology.orchestra.run.vm05.stdout:prometheus ?:9095 1/1 31s ago 6m count:1
2026-03-10T07:23:22.324 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch host ls'
2026-03-10T07:23:23.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:23 vm05 bash[17520]: audit 2026-03-10T07:23:22.260467+0000 mgr.vm05.wnsmpp (mgr.14195) 360 : audit [DBG] from='client.14862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:23.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:23 vm05 bash[17520]: audit 2026-03-10T07:23:22.260467+0000 mgr.vm05.wnsmpp (mgr.14195) 360 : audit [DBG] from='client.14862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:23.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:23 vm05 bash[17520]: cluster 2026-03-10T07:23:22.296578+0000 mgr.vm05.wnsmpp (mgr.14195) 361 : cluster [DBG] pgmap v190: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
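The `ceph orch ls` table above is the service-level summary the job's last check greps: the osd.all-available-devices row reports 8 daemons placed by the `*` (all hosts) placement, matching the eight osd.N entries in the earlier `ceph orch ps` output, and ingress.nfs.foo shows 4/4 (two haproxy plus two keepalived). The final assertion, run at 07:23:30.440 below, simply greps for that OSD service row; a minimal stand-alone version, assuming an admin `ceph` CLI:

    #!/usr/bin/env bash
    # Assert the all-available-devices OSD service is known to the orchestrator.
    # The dot is escaped and the trailing space kept so the match stays exact.
    if ceph orch ls | grep -q '^osd\.all-available-devices '; then
        echo "osd.all-available-devices service present"
    else
        echo "osd.all-available-devices service missing" >&2
        exit 1
    fi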
2026-03-10T07:23:23.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:23 vm05 bash[17520]: cluster 2026-03-10T07:23:22.296578+0000 mgr.vm05.wnsmpp (mgr.14195) 361 : cluster [DBG] pgmap v190: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:23.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:23 vm09 bash[21099]: audit 2026-03-10T07:23:22.260467+0000 mgr.vm05.wnsmpp (mgr.14195) 360 : audit [DBG] from='client.14862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:23.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:23 vm09 bash[21099]: audit 2026-03-10T07:23:22.260467+0000 mgr.vm05.wnsmpp (mgr.14195) 360 : audit [DBG] from='client.14862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:23.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:23 vm09 bash[21099]: cluster 2026-03-10T07:23:22.296578+0000 mgr.vm05.wnsmpp (mgr.14195) 361 : cluster [DBG] pgmap v190: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:23.920 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:23 vm09 bash[21099]: cluster 2026-03-10T07:23:22.296578+0000 mgr.vm05.wnsmpp (mgr.14195) 361 : cluster [DBG] pgmap v190: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:24.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:24 vm09 bash[21099]: cluster 2026-03-10T07:23:24.297051+0000 mgr.vm05.wnsmpp (mgr.14195) 362 : cluster [DBG] pgmap v191: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:24.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:24 vm09 bash[21099]: cluster 2026-03-10T07:23:24.297051+0000 mgr.vm05.wnsmpp (mgr.14195) 362 : cluster [DBG] pgmap v191: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:24.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:24 vm05 bash[17520]: cluster 2026-03-10T07:23:24.297051+0000 mgr.vm05.wnsmpp (mgr.14195) 362 : cluster [DBG] pgmap v191: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:24.961 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:24 vm05 bash[17520]: cluster 2026-03-10T07:23:24.297051+0000 mgr.vm05.wnsmpp (mgr.14195) 362 : cluster [DBG] pgmap v191: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:26.025 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:23:26.334 INFO:teuthology.orchestra.run.vm05.stdout:HOST ADDR LABELS STATUS
2026-03-10T07:23:26.334 INFO:teuthology.orchestra.run.vm05.stdout:vm05 192.168.123.105
2026-03-10T07:23:26.334 INFO:teuthology.orchestra.run.vm05.stdout:vm09 192.168.123.109
2026-03-10T07:23:26.334 INFO:teuthology.orchestra.run.vm05.stdout:2 hosts in cluster
2026-03-10T07:23:26.393 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch
device ls' 2026-03-10T07:23:27.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:27 vm09 bash[21099]: cluster 2026-03-10T07:23:26.297423+0000 mgr.vm05.wnsmpp (mgr.14195) 363 : cluster [DBG] pgmap v192: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:23:27.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:27 vm09 bash[21099]: cluster 2026-03-10T07:23:26.297423+0000 mgr.vm05.wnsmpp (mgr.14195) 363 : cluster [DBG] pgmap v192: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:23:27.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:27 vm09 bash[21099]: audit 2026-03-10T07:23:26.333675+0000 mgr.vm05.wnsmpp (mgr.14195) 364 : audit [DBG] from='client.14866 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:27.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:27 vm09 bash[21099]: audit 2026-03-10T07:23:26.333675+0000 mgr.vm05.wnsmpp (mgr.14195) 364 : audit [DBG] from='client.14866 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:27.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:27 vm05 bash[17520]: cluster 2026-03-10T07:23:26.297423+0000 mgr.vm05.wnsmpp (mgr.14195) 363 : cluster [DBG] pgmap v192: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:23:27.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:27 vm05 bash[17520]: cluster 2026-03-10T07:23:26.297423+0000 mgr.vm05.wnsmpp (mgr.14195) 363 : cluster [DBG] pgmap v192: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T07:23:27.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:27 vm05 bash[17520]: audit 2026-03-10T07:23:26.333675+0000 mgr.vm05.wnsmpp (mgr.14195) 364 : audit [DBG] from='client.14866 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:27.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:27 vm05 bash[17520]: audit 2026-03-10T07:23:26.333675+0000 mgr.vm05.wnsmpp (mgr.14195) 364 : audit [DBG] from='client.14866 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:23:28.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:28 vm09 bash[21099]: audit 2026-03-10T07:23:27.677928+0000 mon.vm05 (mon.0) 951 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:23:28.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:28 vm09 bash[21099]: audit 2026-03-10T07:23:27.677928+0000 mon.vm05 (mon.0) 951 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:23:28.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:28 vm05 bash[17520]: audit 2026-03-10T07:23:27.677928+0000 mon.vm05 (mon.0) 951 : audit [DBG] from='mgr.14195 192.168.123.105:0/188647168' entity='mgr.vm05.wnsmpp' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T07:23:28.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:28 vm05 bash[17520]: audit 2026-03-10T07:23:27.677928+0000 mon.vm05 (mon.0) 951 : audit [DBG] from='mgr.14195 
2026-03-10T07:23:29.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:29 vm09 bash[21099]: cluster 2026-03-10T07:23:28.297780+0000 mgr.vm05.wnsmpp (mgr.14195) 365 : cluster [DBG] pgmap v193: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:29.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:29 vm09 bash[21099]: cluster 2026-03-10T07:23:28.297780+0000 mgr.vm05.wnsmpp (mgr.14195) 365 : cluster [DBG] pgmap v193: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:29.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:29 vm05 bash[17520]: cluster 2026-03-10T07:23:28.297780+0000 mgr.vm05.wnsmpp (mgr.14195) 365 : cluster [DBG] pgmap v193: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:29.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:29 vm05 bash[17520]: cluster 2026-03-10T07:23:28.297780+0000 mgr.vm05.wnsmpp (mgr.14195) 365 : cluster [DBG] pgmap v193: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:30.082 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:23:30.362 INFO:teuthology.orchestra.run.vm05.stdout:HOST  PATH      TYPE  DEVICE ID             SIZE   AVAILABLE  REFRESHED  REJECT REASONS
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm05  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         2m ago     Has a FileSystem, Insufficient space (<5GB)
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm05  /dev/vdb  hdd   DWNBRSTVMM05001       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm05  /dev/vdc  hdd   DWNBRSTVMM05002       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm05  /dev/vdd  hdd   DWNBRSTVMM05003       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm05  /dev/vde  hdd   DWNBRSTVMM05004       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm09  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         2m ago     Has a FileSystem, Insufficient space (<5GB)
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm09  /dev/vdb  hdd   DWNBRSTVMM09001       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm09  /dev/vdc  hdd   DWNBRSTVMM09002       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm09  /dev/vdd  hdd   DWNBRSTVMM09003       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T07:23:30.363 INFO:teuthology.orchestra.run.vm05.stdout:vm09  /dev/vde  hdd   DWNBRSTVMM09004       20.0G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
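Every device in the listing is rejected ("Has a FileSystem", "LVM detected"), which is expected at this point: the osd.all-available-devices service confirmed just below has already consumed them as OSDs. For scripting the same check, the JSON form of the listing is easier to filter than the table; a minimal sketch, assuming cephadm's inventory JSON exposes per-host addr and devices[].path/devices[].available fields and that jq is installed:

    # Hypothetical filter: print host/path pairs cephadm still considers usable.
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph orch device ls --format json \
        | jq -r '.[] | .addr as $h | .devices[] | select(.available) | [$h, .path] | @tsv'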
2026-03-10T07:23:30.440 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- bash -c 'ceph orch ls | grep '"'"'^osd.all-available-devices '"'"''
2026-03-10T07:23:31.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:31 vm09 bash[21099]: cluster 2026-03-10T07:23:30.298194+0000 mgr.vm05.wnsmpp (mgr.14195) 366 : cluster [DBG] pgmap v194: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:31.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:31 vm09 bash[21099]: cluster 2026-03-10T07:23:30.298194+0000 mgr.vm05.wnsmpp (mgr.14195) 366 : cluster [DBG] pgmap v194: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:31.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:31 vm09 bash[21099]: audit 2026-03-10T07:23:30.361826+0000 mgr.vm05.wnsmpp (mgr.14195) 367 : audit [DBG] from='client.14870 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:31.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:31 vm09 bash[21099]: audit 2026-03-10T07:23:30.361826+0000 mgr.vm05.wnsmpp (mgr.14195) 367 : audit [DBG] from='client.14870 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:31 vm05 bash[17520]: cluster 2026-03-10T07:23:30.298194+0000 mgr.vm05.wnsmpp (mgr.14195) 366 : cluster [DBG] pgmap v194: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:31 vm05 bash[17520]: cluster 2026-03-10T07:23:30.298194+0000 mgr.vm05.wnsmpp (mgr.14195) 366 : cluster [DBG] pgmap v194: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T07:23:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:31 vm05 bash[17520]: audit 2026-03-10T07:23:30.361826+0000 mgr.vm05.wnsmpp (mgr.14195) 367 : audit [DBG] from='client.14870 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:31.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:31 vm05 bash[17520]: audit 2026-03-10T07:23:30.361826+0000 mgr.vm05.wnsmpp (mgr.14195) 367 : audit [DBG] from='client.14870 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:33.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:33 vm09 bash[21099]: cluster 2026-03-10T07:23:32.298595+0000 mgr.vm05.wnsmpp (mgr.14195) 368 : cluster [DBG] pgmap v195: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:33.670 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:33 vm09 bash[21099]: cluster 2026-03-10T07:23:32.298595+0000 mgr.vm05.wnsmpp (mgr.14195) 368 : cluster [DBG] pgmap v195: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:33.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:33 vm05 bash[17520]: cluster 2026-03-10T07:23:32.298595+0000 mgr.vm05.wnsmpp (mgr.14195) 368 : cluster [DBG] pgmap v195: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:33.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:33 vm05 bash[17520]: cluster 2026-03-10T07:23:32.298595+0000 mgr.vm05.wnsmpp (mgr.14195) 368 : cluster [DBG] pgmap v195: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:34.142 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:23:34.424 INFO:teuthology.orchestra.run.vm05.stdout:osd.all-available-devices 8 43s ago 5m *
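The final verification step is a fixed list of ceph orch queries run through cephadm shell, ending with a grep that fails the job unless the osd.all-available-devices service is present; the output above ("8 43s ago 5m *") shows all eight OSDs placed. The same sequence as a standalone bash sketch (cephadm resolves the config, keyring, and fsid itself when only one cluster is present, as the "Inferring config" lines show):

    # Sketch of the smoke test's final orchestrator checks.
    set -e
    for sub in 'status' 'ps' 'ls' 'host ls' 'device ls'; do
        sudo cephadm shell -- ceph orch $sub
    done
    sudo cephadm shell -- ceph orch ls | grep '^osd.all-available-devices '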
2026-03-10T07:23:34.470 DEBUG:teuthology.run_tasks:Unwinding manager vip
2026-03-10T07:23:34.473 INFO:tasks.vip:Removing 12.12.0.105 (and any VIPs) on vm05.local iface ens3...
2026-03-10T07:23:34.473 DEBUG:teuthology.orchestra.run.vm05:> sudo ip addr del 12.12.0.105/22 dev ens3
2026-03-10T07:23:34.482 DEBUG:teuthology.orchestra.run.vm05:> sudo ip addr del 12.12.1.105/22 dev ens3
2026-03-10T07:23:34.532 INFO:tasks.vip:Removing 12.12.0.109 (and any VIPs) on vm09.local iface ens3...
2026-03-10T07:23:34.532 DEBUG:teuthology.orchestra.run.vm09:> sudo ip addr del 12.12.0.109/22 dev ens3
2026-03-10T07:23:34.539 DEBUG:teuthology.orchestra.run.vm09:> sudo ip addr del 12.12.1.105/22 dev ens3
2026-03-10T07:23:34.588 INFO:teuthology.orchestra.run.vm09.stderr:RTNETLINK answers: Cannot assign requested address
2026-03-10T07:23:34.589 DEBUG:teuthology.orchestra.run:got remote process result: 2
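The exit status 2 from vm09 is benign: the vip task removes every address it ever managed, and 12.12.1.105 (the ingress virtual IP) was evidently no longer assigned on vm09's ens3, so ip addr del answers "Cannot assign requested address". A teardown that tolerates an already-absent address could test first, e.g.:

    # Sketch: delete a test address only if it is currently assigned
    # (address and interface copied from the run above).
    vip=12.12.1.105/22
    dev=ens3
    if ip -o addr show dev "$dev" | grep -qF " ${vip%/*}/"; then
        sudo ip addr del "$vip" dev "$dev"
    fi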
2026-03-10T07:23:34.589 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T07:23:34.591 INFO:tasks.cephadm:Teardown begin
2026-03-10T07:23:34.591 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:23:34.601 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:23:34.637 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T07:23:34.637 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0f57d3c-1c50-11f1-837e-f755e850132e -- ceph mgr module disable cephadm
2026-03-10T07:23:35.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:35 vm09 bash[21099]: cluster 2026-03-10T07:23:34.299081+0000 mgr.vm05.wnsmpp (mgr.14195) 369 : cluster [DBG] pgmap v196: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:35.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:35 vm09 bash[21099]: cluster 2026-03-10T07:23:34.299081+0000 mgr.vm05.wnsmpp (mgr.14195) 369 : cluster [DBG] pgmap v196: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:35.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:35 vm09 bash[21099]: audit 2026-03-10T07:23:34.409790+0000 mgr.vm05.wnsmpp (mgr.14195) 370 : audit [DBG] from='client.14874 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:35.669 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:35 vm09 bash[21099]: audit 2026-03-10T07:23:34.409790+0000 mgr.vm05.wnsmpp (mgr.14195) 370 : audit [DBG] from='client.14874 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:35 vm05 bash[17520]: cluster 2026-03-10T07:23:34.299081+0000 mgr.vm05.wnsmpp (mgr.14195) 369 : cluster [DBG] pgmap v196: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:35 vm05 bash[17520]: cluster 2026-03-10T07:23:34.299081+0000 mgr.vm05.wnsmpp (mgr.14195) 369 : cluster [DBG] pgmap v196: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:35 vm05 bash[17520]: audit 2026-03-10T07:23:34.409790+0000 mgr.vm05.wnsmpp (mgr.14195) 370 : audit [DBG] from='client.14874 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:35.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:35 vm05 bash[17520]: audit 2026-03-10T07:23:34.409790+0000 mgr.vm05.wnsmpp (mgr.14195) 370 : audit [DBG] from='client.14874 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:23:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:37 vm05 bash[17520]: cluster 2026-03-10T07:23:36.299536+0000 mgr.vm05.wnsmpp (mgr.14195) 371 : cluster [DBG] pgmap v197: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:37.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:37 vm05 bash[17520]: cluster 2026-03-10T07:23:36.299536+0000 mgr.vm05.wnsmpp (mgr.14195) 371 : cluster [DBG] pgmap v197: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:37.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:37 vm09 bash[21099]: cluster 2026-03-10T07:23:36.299536+0000 mgr.vm05.wnsmpp (mgr.14195) 371 : cluster [DBG] pgmap v197: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:37.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:37 vm09 bash[21099]: cluster 2026-03-10T07:23:36.299536+0000 mgr.vm05.wnsmpp (mgr.14195) 371 : cluster [DBG] pgmap v197: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:39.308 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/mon.vm05/config
2026-03-10T07:23:39.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T07:23:39.451+0000 7f403818c640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T07:23:39.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T07:23:39.451+0000 7f403818c640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T07:23:39.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T07:23:39.451+0000 7f403818c640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T07:23:39.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T07:23:39.451+0000 7f403818c640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T07:23:39.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T07:23:39.451+0000 7f403818c640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T07:23:39.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T07:23:39.451+0000 7f403818c640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T07:23:39.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T07:23:39.451+0000 7f403818c640 -1 monclient: keyring not found
2026-03-10T07:23:39.457 INFO:teuthology.orchestra.run.vm05.stderr:[errno 21] error connecting to the cluster
2026-03-10T07:23:39.509 DEBUG:teuthology.orchestra.run:got remote process result: 1
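The mgr module disable call fails for a reason unrelated to the cluster: /etc/ceph/ceph.keyring on vm05 is a directory rather than a file, most likely created as a missing bind-mount source by the container runtime (the later rm -f hits the same problem with /etc/ceph/ceph.client.admin.keyring). A pre-flight guard along these lines would surface or clear that state before shelling in (a sketch, not teuthology code):

    # Remove keyring paths that were created as (empty) directories.
    for p in /etc/ceph/ceph.keyring /etc/ceph/ceph.client.admin.keyring; do
        if [ -d "$p" ]; then
            echo "warning: $p is a directory, removing" >&2
            sudo rmdir "$p"    # fails loudly if it is unexpectedly non-empty
        fi
    done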
2026-03-10T07:23:39.509 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T07:23:39.509 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T07:23:39.512 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T07:23:39.515 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T07:23:39.515 INFO:tasks.cephadm.mon.vm05:Stopping mon.vm05...
2026-03-10T07:23:39.515 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05
2026-03-10T07:23:39.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:39 vm05 bash[17520]: cluster 2026-03-10T07:23:38.300050+0000 mgr.vm05.wnsmpp (mgr.14195) 372 : cluster [DBG] pgmap v198: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:39.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:39 vm05 bash[17520]: cluster 2026-03-10T07:23:38.300050+0000 mgr.vm05.wnsmpp (mgr.14195) 372 : cluster [DBG] pgmap v198: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:39.610 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:39 vm05 systemd[1]: Stopping Ceph mon.vm05 for f0f57d3c-1c50-11f1-837e-f755e850132e...
2026-03-10T07:23:39.843 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:39 vm05 bash[17520]: debug 2026-03-10T07:23:39.599+0000 7fdc0e852640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm05 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T07:23:39.843 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:39 vm05 bash[17520]: debug 2026-03-10T07:23:39.599+0000 7fdc0e852640 -1 mon.vm05@0(leader) e2 *** Got Signal Terminated ***
2026-03-10T07:23:39.890 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 07:23:39 vm05 bash[61485]: ceph-f0f57d3c-1c50-11f1-837e-f755e850132e-mon-vm05
2026-03-10T07:23:39.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:39 vm09 bash[21099]: cluster 2026-03-10T07:23:38.300050+0000 mgr.vm05.wnsmpp (mgr.14195) 372 : cluster [DBG] pgmap v198: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:39.919 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:39 vm09 bash[21099]: cluster 2026-03-10T07:23:38.300050+0000 mgr.vm05.wnsmpp (mgr.14195) 372 : cluster [DBG] pgmap v198: 97 pgs: 97 active+clean; 480 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s
2026-03-10T07:23:39.989 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm05.service'
2026-03-10T07:23:40.031 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T07:23:40.032 INFO:tasks.cephadm.mon.vm05:Stopped mon.vm05
2026-03-10T07:23:40.032 INFO:tasks.cephadm.mon.vm09:Stopping mon.vm09...
2026-03-10T07:23:40.032 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm09
2026-03-10T07:23:40.313 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:40 vm09 systemd[1]: Stopping Ceph mon.vm09 for f0f57d3c-1c50-11f1-837e-f755e850132e...
2026-03-10T07:23:40.313 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:40 vm09 bash[21099]: debug 2026-03-10T07:23:40.140+0000 7f9636930640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm09 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T07:23:40.313 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:40 vm09 bash[21099]: debug 2026-03-10T07:23:40.140+0000 7f9636930640 -1 mon.vm09@1(peon) e2 *** Got Signal Terminated ***
2026-03-10T07:23:40.313 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 07:23:40 vm09 bash[42817]: ceph-f0f57d3c-1c50-11f1-837e-f755e850132e-mon-vm09
2026-03-10T07:23:40.315 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0f57d3c-1c50-11f1-837e-f755e850132e@mon.vm09.service'
2026-03-10T07:23:40.334 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T07:23:40.334 INFO:tasks.cephadm.mon.vm09:Stopped mon.vm09
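Each cephadm daemon runs as a templated systemd unit named ceph-<fsid>@<daemon>, which is why the mons are stopped with plain systemctl stop. Generalized to every unit of one cluster, the same idea looks like (fsid copied from this run):

    # Sketch: stop all systemd units belonging to one Ceph cluster.
    fsid=f0f57d3c-1c50-11f1-837e-f755e850132e
    systemctl list-units --plain --no-legend "ceph-${fsid}@*" \
        | awk '{print $1}' | xargs -r -n1 sudo systemctl stop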
2026-03-10T07:23:40.334 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f0f57d3c-1c50-11f1-837e-f755e850132e --force --keep-logs
2026-03-10T07:23:40.440 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:24:28.845 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f0f57d3c-1c50-11f1-837e-f755e850132e --force --keep-logs
2026-03-10T07:24:28.935 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:25:11.607 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:25:11.614 INFO:teuthology.orchestra.run.vm05.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-10T07:25:11.614 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:25:11.614 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:25:11.622 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T07:25:11.622 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944/remote/vm05/crash
2026-03-10T07:25:11.622 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/crash -- .
2026-03-10T07:25:11.665 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/crash: Cannot open: No such file or directory
2026-03-10T07:25:11.665 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now
2026-03-10T07:25:11.665 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944/remote/vm09/crash
2026-03-10T07:25:11.666 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/crash -- .
2026-03-10T07:25:11.674 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/crash: Cannot open: No such file or directory
2026-03-10T07:25:11.674 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now
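Both crash-dump transfers abort because /var/lib/ceph/<fsid>/crash no longer exists, apparently removed along with the rest of the cluster state by the preceding rm-cluster --keep-logs (which preserves only /var/log/ceph). The tar errors are cosmetic; a guarded copy would skip silently instead:

    # Sketch: archive the crash directory only when it exists.
    fsid=f0f57d3c-1c50-11f1-837e-f755e850132e
    crash=/var/lib/ceph/$fsid/crash
    if sudo test -d "$crash"; then
        sudo tar c -f - -C "$crash" -- . > crash.tar
    fi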
2026-03-10T07:25:11.674 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T07:25:11.674 DEBUG:teuthology.orchestra.run.vm05:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_DAEMON_PLACE_FAIL | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
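The badness scan encodes the job's log-ignorelist as a chain of egrep -v stages, keeping only the first unexpected CEPHADM_ warning; producing no output, as here, is what lets the job pass. Condensed into a single exclusion pattern (patterns copied from the command above):

    # Sketch: the cluster-log badness check as one pipeline.
    log=/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log
    ignore='\(MDS_ALL_DOWN\)|\(MDS_UP_LESS_THAN_MAX\)|CEPHADM_DAEMON_PLACE_FAIL|CEPHADM_FAILED_DAEMON'
    sudo grep -E '\[ERR\]|\[WRN\]|\[SEC\]' "$log" | grep CEPHADM_ | grep -Ev "$ignore" | head -n 1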
2026-03-10T07:25:11.717 INFO:tasks.cephadm:Compressing logs...
2026-03-10T07:25:11.717 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T07:25:11.759 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mgr.vm09.rfdvwa.log
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm05.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.3.log
2026-03-10T07:25:11.768 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log
2026-03-10T07:25:11.769 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mgr.vm09.rfdvwa.log: 89.8% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T07:25:11.769 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.6.log
2026-03-10T07:25:11.770 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log: 87.1% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log.gz
2026-03-10T07:25:11.771 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mds.foofs.vm09.kuyylf.log
2026-03-10T07:25:11.771 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.6.log: 92.1% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mgr.vm09.rfdvwa.log.gz
2026-03-10T07:25:11.771 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.2.log
2026-03-10T07:25:11.772 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mds.foofs.vm09.kuyylf.log: 82.5% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mds.foofs.vm09.kuyylf.log.gz
2026-03-10T07:25:11.772 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.audit.log
2026-03-10T07:25:11.779 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-volume.log
2026-03-10T07:25:11.780 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.audit.log: 91.4% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.audit.log.gz
2026-03-10T07:25:11.782 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mds.foofs.vm05.oxovsp.log
2026-03-10T07:25:11.783 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log: 87.0% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.log.gz
2026-03-10T07:25:11.783 INFO:teuthology.orchestra.run.vm05.stderr: 91.1% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T07:25:11.783 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mgr.vm05.wnsmpp.log
2026-03-10T07:25:11.783 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mds.foofs.vm05.oxovsp.log: 79.9% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mds.foofs.vm05.oxovsp.log.gz
2026-03-10T07:25:11.784 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.1.log
2026-03-10T07:25:11.787 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mon.vm09.log
2026-03-10T07:25:11.794 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mgr.vm05.wnsmpp.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.5.log
2026-03-10T07:25:11.802 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.7.log
2026-03-10T07:25:11.803 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-client.ceph-exporter.vm09.log
2026-03-10T07:25:11.807 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mon.vm09.log: 96.2% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-volume.log.gz
2026-03-10T07:25:11.810 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mon.vm05.log
2026-03-10T07:25:11.810 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.cephadm.log
2026-03-10T07:25:11.811 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-client.ceph-exporter.vm09.log: 30.2% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-client.ceph-exporter.vm09.log.gz
2026-03-10T07:25:11.818 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.audit.log
2026-03-10T07:25:11.823 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.4.log
2026-03-10T07:25:11.823 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.cephadm.log: 83.0% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.cephadm.log.gz
2026-03-10T07:25:11.826 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mon.vm05.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-volume.log
2026-03-10T07:25:11.830 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.cephadm.log
2026-03-10T07:25:11.830 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.0.log
2026-03-10T07:25:11.831 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-volume.log: 91.2% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.audit.log.gz
2026-03-10T07:25:11.834 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-client.ceph-exporter.vm05.log
2026-03-10T07:25:11.838 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.cephadm.log: 83.4% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph.cephadm.log.gz
2026-03-10T07:25:11.847 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-client.ceph-exporter.vm05.log: 94.1% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-client.ceph-exporter.vm05.log.gz
2026-03-10T07:25:11.858 INFO:teuthology.orchestra.run.vm05.stderr: 96.2% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-volume.log.gz
2026-03-10T07:25:11.955 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.4.log: /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.0.log: 93.3% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.6.log.gz
2026-03-10T07:25:11.955 INFO:teuthology.orchestra.run.vm09.stderr: 92.5% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mon.vm09.log.gz
2026-03-10T07:25:11.986 INFO:teuthology.orchestra.run.vm09.stderr: 93.2% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.2.log.gz
2026-03-10T07:25:11.987 INFO:teuthology.orchestra.run.vm09.stderr: 93.3% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.4.log.gz
2026-03-10T07:25:12.025 INFO:teuthology.orchestra.run.vm05.stderr: 93.6% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.3.log.gz
2026-03-10T07:25:12.031 INFO:teuthology.orchestra.run.vm09.stderr: 93.2% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.0.log.gz
2026-03-10T07:25:12.033 INFO:teuthology.orchestra.run.vm09.stderr:
2026-03-10T07:25:12.033 INFO:teuthology.orchestra.run.vm09.stderr:real 0m0.271s
2026-03-10T07:25:12.033 INFO:teuthology.orchestra.run.vm09.stderr:user 0m0.453s
2026-03-10T07:25:12.033 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.037s
2026-03-10T07:25:12.042 INFO:teuthology.orchestra.run.vm05.stderr: 93.3% 93.4% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.5.log.gz
2026-03-10T07:25:12.042 INFO:teuthology.orchestra.run.vm05.stderr: -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.1.log.gz
2026-03-10T07:25:12.044 INFO:teuthology.orchestra.run.vm05.stderr: 89.7% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mgr.vm05.wnsmpp.log.gz
2026-03-10T07:25:12.065 INFO:teuthology.orchestra.run.vm05.stderr: 93.4% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-osd.7.log.gz
2026-03-10T07:25:12.146 INFO:teuthology.orchestra.run.vm05.stderr: 91.1% -- replaced with /var/log/ceph/f0f57d3c-1c50-11f1-837e-f755e850132e/ceph-mon.vm05.log.gz
2026-03-10T07:25:12.147 INFO:teuthology.orchestra.run.vm05.stderr:
2026-03-10T07:25:12.147 INFO:teuthology.orchestra.run.vm05.stderr:real 0m0.386s
2026-03-10T07:25:12.147 INFO:teuthology.orchestra.run.vm05.stderr:user 0m0.614s
2026-03-10T07:25:12.147 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m0.069s
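The scrambled gzip messages above (percentages attached to the wrong filenames) are an artifact of parallelism, not log corruption: xargs --max-procs=0 starts one gzip per file simultaneously, and their two-part --verbose reports interleave on the shared stderr. A serial run keeps the output coherent at some cost in wall time:

    # Sketch: same compression, one gzip at a time for readable output.
    sudo find /var/log/ceph -name '*.log' -print0 \
        | sudo xargs -0 --no-run-if-empty --max-args=1 --max-procs=1 -- gzip -5 --verbose --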
2026-03-10T07:25:12.147 INFO:tasks.cephadm:Archiving logs...
2026-03-10T07:25:12.148 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944/remote/vm05/log
2026-03-10T07:25:12.148 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T07:25:12.236 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944/remote/vm09/log
2026-03-10T07:25:12.236 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T07:25:12.271 INFO:tasks.cephadm:Removing cluster...
2026-03-10T07:25:12.272 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f0f57d3c-1c50-11f1-837e-f755e850132e --force
2026-03-10T07:25:12.372 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:25:13.445 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f0f57d3c-1c50-11f1-837e-f755e850132e --force
2026-03-10T07:25:13.547 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: f0f57d3c-1c50-11f1-837e-f755e850132e
2026-03-10T07:25:14.614 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T07:25:14.614 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T07:25:14.618 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T07:25:14.621 INFO:tasks.cephadm:Teardown complete
2026-03-10T07:25:14.621 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T07:25:14.624 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T07:25:14.624 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T07:25:14.659 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:#sid.f5s.de 131.188.3.220 2 u 39 64 377 25.043 -0.772 2.931
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:#web80.weingaert 130.149.17.21 2 u 27 64 377 28.264 -3.525 0.606
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:-butterfly.post- 124.216.164.14 2 u 26 64 377 28.687 +0.000 0.752
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:+time.cloudflare 10.125.9.225 3 u 31 64 377 20.417 +1.138 0.772
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:#stratum2-3.NTP. 129.70.137.82 2 u 39 64 277 30.567 -2.585 3.597
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:*77.90.0.148 (14 131.188.3.220 2 u 31 64 377 22.744 +0.762 0.714
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:-static.215.156. 35.73.197.144 2 u 32 64 377 23.557 -0.722 0.833
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:-ntp1.rrze.uni-e .DCFp. 1 u 32 64 377 26.243 -1.217 0.709
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:-red-pelican-637 79.133.44.136 2 u 23 64 377 30.479 +0.104 0.607
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:-ns1.blazing.de 213.172.96.14 3 u 28 64 377 31.871 -1.204 0.759
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:#zeus.f5s.de 131.188.3.220 2 u 45 64 377 25.019 +6.926 7.428
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:#185.13.148.71 79.133.44.146 2 u 26 64 377 31.950 -0.108 0.726
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:-185.125.190.56 79.243.60.50 2 u 44 64 377 36.685 -0.959 0.681
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:+ntp2.m-online.n 212.18.1.106 2 u 28 64 377 43.582 +0.330 0.749
2026-03-10T07:25:15.145 INFO:teuthology.orchestra.run.vm05.stdout:#tor.nocabal.de 131.188.3.220 2 u 32 64 377 25.107 -0.910 0.608
2026-03-10T07:25:15.146 INFO:teuthology.orchestra.run.vm05.stdout:#hermes.linxx.pa 185.131.196.23 2 u 30 64 377 28.278 -4.894 0.705
2026-03-10T07:25:15.146 INFO:teuthology.orchestra.run.vm05.stdout:-185.125.190.57 194.121.207.249 2 u 49 64 377 36.504 -1.731 0.712
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:==============================================================================
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+web80.weingaert 130.149.17.21 2 u 31 64 377 28.860 -3.395 3.054
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+time.cloudflare 10.124.8.190 3 u 28 64 377 20.436 +0.527 3.064
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+red-pelican-637 79.133.44.136 2 u 36 64 377 30.825 -0.167 3.289
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+sid.f5s.de 131.188.3.220 2 u 38 64 377 25.126 -0.438 2.663
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+zeus.f5s.de 131.188.3.220 2 u 40 64 377 25.190 -0.436 2.707
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+ntp2.m-online.n 212.18.1.106 2 u 26 64 377 43.586 +0.899 3.031
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+77.90.0.148 (14 131.188.3.220 2 u 30 64 377 22.954 +0.373 3.032
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+tor.nocabal.de 131.188.3.220 2 u 27 64 377 25.248 -0.610 3.052
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+butterfly.post- 124.216.164.14 2 u 31 64 377 28.916 -0.490 3.025
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+stratum2-3.NTP. 129.70.137.82 2 u 45 64 277 30.549 -2.105 2.319
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+172-104-154-182 81.104.22.229 2 u 31 64 377 23.156 -5.652 3.561
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:*ntp1.rrze.uni-e .DCFp. 1 u 30 64 377 26.165 -1.469 2.981
2026-03-10T07:25:15.185 INFO:teuthology.orchestra.run.vm09.stdout:+ns1.blazing.de 213.172.96.14 3 u 28 64 377 32.031 -0.635 3.007
2026-03-10T07:25:15.186 INFO:teuthology.orchestra.run.vm09.stdout:+alphyn.canonica 132.163.96.1 2 u 49 64 277 100.832 -2.819 2.735
2026-03-10T07:25:15.186 INFO:teuthology.orchestra.run.vm09.stdout:+static.215.156. 35.73.197.144 2 u 27 64 377 23.708 -0.292 2.969
2026-03-10T07:25:15.186 INFO:teuthology.orchestra.run.vm09.stdout:#hermes.linxx.pa 185.131.196.23 2 u 21 64 377 28.335 -4.268 2.846
2026-03-10T07:25:15.186 INFO:teuthology.orchestra.run.vm09.stdout:+185.125.190.56 79.243.60.50 2 u 49 64 377 32.208 -0.268 2.997
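The final clock check is informational only: ntpq -p || chronyc sources || true can never fail the job, and nothing parses the peer tables. If an actual assertion on skew were wanted, the offset column (milliseconds) of the selected peer is easy to test:

    # Sketch: fail when the system peer's offset exceeds 250 ms
    # (threshold illustrative; offset is column 9 of `ntpq -pn`).
    ntpq -pn | awk '/^\*/ { o = ($9 < 0 ? -$9 : $9); exit (o > 250) }'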
2026-03-10T07:25:15.186 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T07:25:15.189 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T07:25:15.189 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T07:25:15.192 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T07:25:15.194 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T07:25:15.197 INFO:teuthology.task.internal:Duration was 721.776391 seconds
2026-03-10T07:25:15.197 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T07:25:15.200 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T07:25:15.200 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T07:25:15.201 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T07:25:15.234 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T07:25:15.234 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-10T07:25:15.234 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T07:25:15.287 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-10T07:25:15.287 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
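The kernel-log check mirrors the cluster-log one: a broad match on BUG/INFO/DEADLOCK followed by a long chain of known-benign exclusions, with head -n 1 so only the first offender is reported. Keeping the whitelist in an array and joining it into a single grep -Ev makes the pipeline easier to extend (a sketch, not the teuthology source; only a few of the patterns above are shown):

    # Sketch: kern.log scan with benign patterns joined into one exclusion.
    benign=('task .* blocked for more than .* seconds'
            'lockdep is turned off'
            'trying to register non-static key'
            'CRON')
    pat=$(IFS='|'; echo "${benign[*]}")
    grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' kern.log \
        | grep -Ev "$pat" | head -n 1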
2026-03-10T07:25:15.300 INFO:teuthology.task.internal.syslog:Gathering journalctl...
2026-03-10T07:25:15.300 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T07:25:15.331 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T07:25:15.394 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T07:25:15.394 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T07:25:15.395 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T07:25:15.402 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T07:25:15.402 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T07:25:15.402 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T07:25:15.402 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T07:25:15.403 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T07:25:15.403 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T07:25:15.403 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T07:25:15.403 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T07:25:15.403 INFO:teuthology.orchestra.run.vm09.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T07:25:15.404 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T07:25:15.414 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 90.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T07:25:15.416 INFO:teuthology.orchestra.run.vm05.stderr: 90.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T07:25:15.417 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T07:25:15.421 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T07:25:15.421 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T07:25:15.466 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T07:25:15.473 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T07:25:15.477 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T07:25:15.507 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T07:25:15.512 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-10T07:25:15.520 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-10T07:25:15.529 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T07:25:15.565 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:25:15.565 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T07:25:15.573 DEBUG:teuthology.orchestra.run:got remote process result: 1
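The coredump unwind restores the default kernel.core_pattern, deletes any core that file attributes to systemd-sysusers, and removes the directory only if it ends up empty; the test -e returning 1 on both hosts confirms no real cores survived. The same teardown condensed into a script (paths from this run):

    # Sketch: coredump teardown.
    arch=/home/ubuntu/cephtest/archive
    sudo sysctl -w kernel.core_pattern=core
    sudo find "$arch/coredump" -type f \
        -exec sh -c 'file "$1" | grep -q systemd-sysusers && rm -f "$1"' _ {} \;
    sudo rmdir --ignore-fail-on-non-empty -- "$arch/coredump"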
2026-03-10T07:25:15.573 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T07:25:15.576 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T07:25:15.576 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944/remote/vm05
2026-03-10T07:25:15.576 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T07:25:15.615 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/944/remote/vm09
2026-03-10T07:25:15.615 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T07:25:15.623 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T07:25:15.623 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T07:25:15.659 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T07:25:15.669 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T07:25:15.672 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T07:25:15.672 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T07:25:15.675 INFO:teuthology.task.internal:Tidying up after the test...
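Every archive pull in this teardown (crash directories, /var/log/ceph, the per-host archive just above) uses the same pattern: tar to stdout on the remote, extract on the teuthology host, so nothing is staged on the test node. Reproduced outside the harness it is simply (hostname and paths illustrative):

    # Sketch: stream a remote directory into a local one via tar over ssh.
    mkdir -p remote/vm05
    ssh vm05.local sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . \
        | tar x -f - -C remote/vm05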
2026-03-10T07:25:15.675 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T07:25:15.703 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T07:25:15.705 INFO:teuthology.orchestra.run.vm05.stdout:   258077      4 drwxr-xr-x   2 ubuntu   ubuntu       4096 Mar 10 07:25 /home/ubuntu/cephtest
2026-03-10T07:25:15.713 INFO:teuthology.orchestra.run.vm09.stdout:   258078      4 drwxr-xr-x   2 ubuntu   ubuntu       4096 Mar 10 07:25 /home/ubuntu/cephtest
2026-03-10T07:25:15.714 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T07:25:15.720 INFO:teuthology.run:Summary data:
description: orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 1-start 2-services/nfs-ingress 3-final}
duration: 721.7763912677765
owner: kyr
success: true
2026-03-10T07:25:15.720 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T07:25:15.741 INFO:teuthology.run:pass